Redis Cluster (Part 3)


  • Redis Cluster failure handling & principles
    • Failure detection
    • Failure recovery
  • Building a Redis Cluster
    • Configuring redis.conf
    • Multi-node single-machine deployment
      • Start each Redis instance:
      • Build the cluster:
  • Spring Boot Lettuce configuration
    • Redis utility class reference

  • Unlike the Sentinel architecture, Redis Cluster has no central node: it contains multiple masters. When the data volume grows until a single-master architecture hits a performance bottleneck (e.g., too many write operations, or a single node managing too many keys), the cluster's multi-master design spreads the read/write load across nodes and improves overall performance

  • Unlike the Twemproxy proxy-sharding middleware, which partitions data by consistent hashing and runs as a single instance, Redis Cluster partitions data by hash slots, and clients can read and write through any master node

  1. How consistent-hash partitioning works: treat the whole key space as a token ring covering the range 0 to 2^32. Each node is assigned a token range; to store a key, hash it, locate its position on the ring, and walk clockwise to the nearest node. This scheme suits deployments with many nodes. Note that when scaling out or in, nodes are usually added or removed in powers of two so that recomputing token ranges keeps data migration to a minimum (under 50%); otherwise a full migration may be required

  2. Redis Cluster's hash-slot partitioning predefines a slot range of 0 to 16383; the 16384 slots are divided evenly among the nodes, each of which manages its own share. To store a key, the cluster computes CRC16(key) mod 16384 to decide which slot the key belongs to. Each node serves a subset of the slots; if a node receives a key belonging to a slot managed by another node, it returns a MOVED redirection error, or an ASK error when the slot is being migrated, together with the target node's address, and the client automatically retries against that node. This structure makes it easy to add or remove nodes without making the cluster unavailable
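The slot mapping just described can be sketched in a few lines of Python. This is an illustrative reimplementation, not Redis source: Redis uses the CRC16-CCITT (XModem) variant, and a non-empty {hash tag} restricts hashing to the tag's content so related keys land in the same slot:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): poly 0x1021, init 0 -- the variant Redis uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag: hash only its content
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("foo"))  # 12182, same as CLUSTER KEYSLOT foo
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True: same slot
```

Keys sharing a hash tag map to the same slot, which is what makes multi-key operations possible in cluster mode.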

Redis Cluster failure handling & principles

  • Nodes communicate via the Gossip protocol, a decentralized, fault-tolerant protocol that guarantees eventual consistency

Failure detection

  • Subjectively down (PFAIL): one node's own judgment about another node
  1. Node 1 periodically sends a PING message to node 2
  2. On success, node 2 is running normally: it responds with a PONG, and node 1 updates its last-communication time for node 2
  3. On failure, node 1 waits for the next timer cycle and pings node 2 again
  4. If node 1's last successful communication with node 2 is older than cluster-node-timeout, node 1 marks node 2 as PFAIL
  • Objectively down (FAIL): more than half of the slot-holding masters agree on the subjective-down mark
  1. When a node receives a PING message carrying PFAIL state for another node, it adds that subjectively-down report to its own failure report list
  2. It then attempts to mark the reported node as objectively down
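The two-stage detection above can be summarized in a small sketch (all names and data layouts here are illustrative, not Redis internals):

```python
# Conceptual sketch of Redis Cluster failure detection.
CLUSTER_NODE_TIMEOUT = 15.0  # seconds, mirrors cluster-node-timeout 15000

def is_pfail(last_pong: float, now: float) -> bool:
    """Subjectively down: no PONG received within cluster-node-timeout."""
    return now - last_pong > CLUSTER_NODE_TIMEOUT

def is_fail(pfail_reporters: set, slot_holding_masters: int) -> bool:
    """Objectively down: more than half of the slot-holding masters report PFAIL."""
    return len(pfail_reporters) > slot_holding_masters / 2

# Node 2 last answered 20 s ago -> node 1 marks it PFAIL
print(is_pfail(last_pong=100.0, now=120.0))   # True

# 2 of 3 slot-holding masters report PFAIL -> promoted to FAIL
print(is_fail({"master-a", "master-b"}, 3))   # True
```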

Failure recovery

  • Conditions for a replica to become master
  1. The replica with the largest replication offset has priority
  2. A replica whose last interaction with its master is too old, exceeding the configured limit, will not attempt failover
  3. For maximum availability, set cluster-replica-validity-factor to 0 so that a replica always attempts failover, even if its data is very stale
  • Failover
  1. The eligible replicas hold an election to choose the new master
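The staleness check in steps 2 and 3 can be written out explicitly. This is a sketch; the 30 s / factor 10 / 10 s numbers match the redis.conf example given later:

```python
def max_disconnection(node_timeout_s: float, validity_factor: int,
                      ping_period_s: float) -> float:
    """Longest master-link downtime after which a replica stops trying to fail over:
    (cluster-node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period."""
    return node_timeout_s * validity_factor + ping_period_s

def may_failover(disconnected_s: float, node_timeout_s: float,
                 validity_factor: int, ping_period_s: float) -> bool:
    if validity_factor == 0:
        return True  # factor 0: always attempt failover, even with stale data
    return disconnected_s <= max_disconnection(node_timeout_s, validity_factor, ping_period_s)

# 30 s timeout, factor 10, 10 s ping period -> 310 s window
print(max_disconnection(30, 10, 10))   # 310.0
print(may_failover(400, 30, 10, 10))   # False: data too stale
print(may_failover(400, 30, 0, 10))    # True: factor 0 always allows failover
```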

Building a Redis Cluster

Configuring redis.conf


# Enable cluster mode
cluster-enabled yes

# Cluster configuration file name (generated and maintained by Redis; do not edit manually)
cluster-config-file nodes-6379.conf

# Node timeout in milliseconds
cluster-node-timeout 15000

# 1. A replica checks whether its link to the master has been down longer than: (cluster-node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period. For example, with cluster-node-timeout at 30 seconds, a validity factor of 10, and repl-ping-replica-period at 10 seconds, a replica disconnected for more than 310 seconds will not try to become master
# 2. For maximum availability, set this to 0: a replica may then be promoted no matter how long it has been disconnected
cluster-replica-validity-factor 10

# A master must keep at least this many replicas; only its surplus replicas may migrate to masters that have none left (default 1)
cluster-migration-barrier 1

# 1. By default (yes), the whole cluster stops serving requests as soon as any slot becomes uncovered
# 2. Set to no to keep serving the slots that are still covered
cluster-require-full-coverage no

# If set to yes, replicas will not attempt failover while their master is down
cluster-replica-no-failover no

# Allow reads during node failures or network partitions, without data-consistency guarantees, as long as the node holds the requested keys (default no)
# cluster-allow-reads-when-down no

Multi-node single-machine deployment

  • Create six directories, 6379,6380,6381,6382,6383,6384, and copy the prepared redis.conf into each

# ./6379/redis.conf
port 6379
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage no

# ./6380/redis.conf
port 6380
cluster-enabled yes
cluster-config-file nodes-6380.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage no

# ./6381/redis.conf
port 6381
cluster-enabled yes
cluster-config-file nodes-6381.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage no

# ./6382/redis.conf
port 6382
cluster-enabled yes
cluster-config-file nodes-6382.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage no

# ./6383/redis.conf
port 6383
cluster-enabled yes
cluster-config-file nodes-6383.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage no

# ./6384/redis.conf
port 6384
cluster-enabled yes
cluster-config-file nodes-6384.conf
cluster-node-timeout 15000
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-require-full-coverage no

Start each Redis instance:


../src/redis-server ./6379/redis.conf;
../src/redis-server ./6380/redis.conf;
../src/redis-server ./6381/redis.conf;
../src/redis-server ./6382/redis.conf;
../src/redis-server ./6383/redis.conf;
../src/redis-server ./6384/redis.conf;

Build the cluster:

  1. redis-trib.rb has been deprecated and can no longer be used

$ ../src/redis-trib.rb create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
WARNING: redis-trib.rb is not longer available!
You should use redis-cli instead.

All commands and features belonging to redis-trib.rb have been moved
to redis-cli.
In order to use them you should call redis-cli with the --cluster
option followed by the subcommand name, arguments and options.

Use the following syntax:
redis-cli --cluster SUBCOMMAND [ARGUMENTS] [OPTIONS]

Example:
redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1

To get help about all subcommands, type:
redis-cli --cluster help

  2. The new way is to build the cluster with redis-cli

$ ../src/redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382
   replicates 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc
S: db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383
   replicates 869d1dbc01d40028d7b8556342f2fbe814dc2093
S: 38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384
   replicates d431718be63c7f5d0803dd8984092ddda63a712f
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 869d1dbc01d40028d7b8556342f2fbe814dc2093
S: 38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384
   slots: (0 slots) slave
   replicates d431718be63c7f5d0803dd8984092ddda63a712f
M: 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc
M: d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

  3. Inspect cluster node information

# You can query any node [-c cluster mode] [-h host address] [-p port]
$ ../src/redis-cli -c -h 127.0.0.1 -p 6379
127.0.0.1:6379> cluster nodes
db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383@16383 slave 869d1dbc01d40028d7b8556342f2fbe814dc2093 0 1594132440000 5 connected
38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384@16384 slave d431718be63c7f5d0803dd8984092ddda63a712f 0 1594132440000 6 connected
6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381@16381 master - 0 1594132441049 3 connected 10923-16383
fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382@16382 slave 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 0 1594132440034 4 connected
869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379@16379 myself,master - 0 1594132438000 1 connected 0-5460
d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380@16380 master - 0 1594132439016 2 connected 5461-10922

# Query cluster state
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:790
cluster_stats_messages_pong_sent:833
cluster_stats_messages_sent:1623
cluster_stats_messages_ping_received:828
cluster_stats_messages_pong_received:790
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1623

  4. Add a node & assign slots

# Start a Redis instance on port 6385
../src/redis-server ./6385/redis.conf;

# Deprecated method; no longer works
$ ../src/redis-trib.rb add-node 127.0.0.1:6385 127.0.0.1:6379

# New way, using redis-cli
# 127.0.0.1:6385 is the new node to add; 127.0.0.1:6379 is any existing node in the target cluster
$ ../src/redis-cli --cluster add-node 127.0.0.1:6385 127.0.0.1:6379
>>> Adding node 127.0.0.1:6385 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc
S: db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 869d1dbc01d40028d7b8556342f2fbe814dc2093
S: 38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384
   slots: (0 slots) slave
   replicates d431718be63c7f5d0803dd8984092ddda63a712f
M: 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6385 to make it join the cluster.
[OK] New node added correctly.

# Inspect the nodes again: the newly added 6385 has become a master but holds no slots; the cluster must be manually resharded to migrate data to it
$ ../src/redis-cli -c -h 127.0.0.1 -p 6379
127.0.0.1:6379> cluster nodes
fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382@16382 slave 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 0 1594214553849 4 connected
db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383@16383 slave 869d1dbc01d40028d7b8556342f2fbe814dc2093 0 1594214549000 5 connected
38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384@16384 slave d431718be63c7f5d0803dd8984092ddda63a712f 0 1594214551791 6 connected
6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381@16381 master - 0 1594214550000 3 connected 10923-16383
27b1c957eac80f46f87aaa9dcb9b007c50218da8 127.0.0.1:6385@16385 master - 0 1594214552819 0 connected
869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379@16379 myself,master - 0 1594214552000 1 connected 0-5460
d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380@16380 master - 0 1594214551000 2 connected 5461-10922

# Assign slots to the given node ID; 127.0.0.1:6379 is any node in the cluster
$ ../src/redis-cli --cluster reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc
S: db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 869d1dbc01d40028d7b8556342f2fbe814dc2093
S: 38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384
   slots: (0 slots) slave
   replicates d431718be63c7f5d0803dd8984092ddda63a712f
M: 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 27b1c957eac80f46f87aaa9dcb9b007c50218da8 127.0.0.1:6385
   slots: (0 slots) master
M: d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096 (16384 / 4 masters)
What is the receiving node ID? 27b1c957eac80f46f87aaa9dcb9b007c50218da8 (ID of the newly added node)
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all (prompts for the reshard source nodes; entering all draws slots from every existing master until 4096 are gathered, then moves them to the target node)

Ready to move 4096 slots.
  Source nodes:
    M: 869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
  Destination node:
    M: 27b1c957eac80f46f87aaa9dcb9b007c50218da8 127.0.0.1:6385
       slots: (0 slots) master
  Resharding plan:
    Moving slot 5461 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5462 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5463 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5464 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5465 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5466 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5467 from d431718be63c7f5d0803dd8984092ddda63a712f
    Moving slot 5468 from d431718be63c7f5d0803dd8984092ddda63a712f
    ...
    ...
Do you want to proceed with the proposed reshard plan (yes/no)? yes (start the resharding)
Moving slot 5461 from 127.0.0.1:6380 to 127.0.0.1:6385: 
Moving slot 5462 from 127.0.0.1:6380 to 127.0.0.1:6385: 
Moving slot 5463 from 127.0.0.1:6380 to 127.0.0.1:6385: 
Moving slot 5464 from 127.0.0.1:6380 to 127.0.0.1:6385: 
...
...

# Check the slot distribution
$ ../src/redis-cli -c -h 127.0.0.1 -p 6379
127.0.0.1:6379> cluster nodes
fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382@16382 slave 6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 0 1594216609000 4 connected
db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383@16383 slave 869d1dbc01d40028d7b8556342f2fbe814dc2093 0 1594216607000 5 connected
38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384@16384 slave d431718be63c7f5d0803dd8984092ddda63a712f 0 1594216607000 6 connected
6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381@16381 master - 0 1594216608400 3 connected 12288-16383
27b1c957eac80f46f87aaa9dcb9b007c50218da8 127.0.0.1:6385@16385 master - 0 1594216609423 7 connected 0-1364 5461-6826 10923-12287
869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379@16379 myself,master - 0 1594216610000 1 connected 1365-5460
d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380@16380 master - 0 1594216610439 2 connected 6827-10922

# Try deleting a master: a node that owns slots cannot be removed; its slots must first be moved to other nodes
$ ../src/redis-cli --cluster del-node 127.0.0.1:6379 27b1c957eac80f46f87aaa9dcb9b007c50218da8
>>> Removing node 27b1c957eac80f46f87aaa9dcb9b007c50218da8 from cluster 127.0.0.1:6379
[ERR] Node 127.0.0.1:6385 is not empty! Reshard data away and try again.

# Send DEBUG SEGFAULT to masters to crash their Redis processes
$ ../src/redis-cli -c -h 127.0.0.1 -p 6379 debug segfault
$ ../src/redis-cli -c -h 127.0.0.1 -p 6381 debug segfault
=== REDIS BUG REPORT START: Cut & paste starting from here ===
857:M 08 Jul 2020 22:16:38.342 # Redis 6.0.5 crashed by signal: 11
857:M 08 Jul 2020 22:16:38.342 # Crashed running the instruction at: 0x103ed601e
...
...
=== REDIS BUG REPORT END. Make sure to include from START to END. ===

       Please report the crash by opening an issue on github:

           http://github.com/antirez/redis/issues

  Suspect RAM error? Use redis-server --test-memory to verify it.

Segmentation fault: 11

# Inspect the nodes again: replicas of other masters also migrate to cover masters left without replicas
$ ../src/redis-cli -c -h 127.0.0.1 -p 6380
127.0.0.1:6380> cluster nodes
fbe3cd549b5bbe777608410df1b673558c652696 127.0.0.1:6382@16382 master - 0 1594217827000 9 connected 12288-16383
6875cbc6bc72ecddcd367b18ebad0c2c0bd2a4fc 127.0.0.1:6381@16381 master,fail - 1594217801401 1594217797000 3 disconnected
38478b553514762eb03da7b5b79cf9b9f988ecbd 127.0.0.1:6384@16384 slave d431718be63c7f5d0803dd8984092ddda63a712f 0 1594217829216 6 connected
db07d2d3cf11b6c57b6f3144a009ba0ea894d38c 127.0.0.1:6383@16383 master - 0 1594217828199 8 connected 1365-5460
27b1c957eac80f46f87aaa9dcb9b007c50218da8 127.0.0.1:6385@16385 master - 0 1594217828000 7 connected 0-1364 5461-6826 10923-12287
869d1dbc01d40028d7b8556342f2fbe814dc2093 127.0.0.1:6379@16379 master,fail - 1594217580902 1594217573361 1 disconnected
d431718be63c7f5d0803dd8984092ddda63a712f 127.0.0.1:6380@16380 myself,master - 0 1594217825000 2 connected 6827-10922

# Once no replica is left to take over a failed master, the cluster goes down
$ ../src/redis-cli -c -h 127.0.0.1 -p 6382 debug segfault
$ ../src/redis-cli -c -h 127.0.0.1 -p 6383 debug segfault
863:S 08 Jul 2020 22:26:05.641 # Cluster state changed: fail

  5. Cluster commands & options

$ ../src/redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
                 --cluster-fix-with-unreachable-masters
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

Note: slave-read-only has no effect in cluster mode. In cluster mode, replicas act as cold standbys by default and serve neither reads nor writes; if needed, a client can enable reads on a replica with the READONLY command
Raise the Linux maximum open-file limit; 65535 is recommended
Disable Linux Transparent Huge Pages (THP): THP causes memory locking that hurts database performance
Redis Cluster supports only database 0

Spring Boot Lettuce configuration


# Redis password
spring.redis.password=
# Redis timeout
spring.redis.timeout=6000ms
# Maximum number of pool connections (-1 for unlimited), default 8
spring.redis.lettuce.pool.max-active=1000
# Maximum time to block waiting for a pool connection (-1 for unlimited, in ms), default -1
spring.redis.lettuce.pool.max-wait=-1
# Maximum number of idle pool connections, default 8
spring.redis.lettuce.pool.max-idle=200
# Minimum number of idle pool connections, default 0
spring.redis.lettuce.pool.min-idle=100
# Maximum time to wait for pending tasks before closing a connection, default 100ms
spring.redis.lettuce.shutdown-timeout=100ms
# All cluster nodes, comma-separated
spring.redis.cluster.nodes=127.0.0.1:6379,127.0.0.1:6380,127.0.0.1:6381,127.0.0.1:6382,127.0.0.1:6383,127.0.0.1:6384
# Maximum number of redirects to follow when a node lookup fails
spring.redis.cluster.max-redirects=3
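The max-redirects setting caps how many MOVED/ASK hops the client follows before giving up. The loop a cluster client runs can be sketched like this, with a fake two-node "cluster" of callables standing in for real connections (parse_redirect, execute, and the nodes map are all illustrative, not Lettuce APIs):

```python
def parse_redirect(error: str):
    """Parse a redirection error such as 'MOVED 3999 127.0.0.1:6381'."""
    kind, slot, addr = error.split()
    host, port = addr.rsplit(":", 1)
    return kind, int(slot), (host, int(port))

def execute(command, node_addr, nodes, max_redirects=3):
    """Follow MOVED/ASK redirects, giving up after max_redirects hops
    (cf. spring.redis.cluster.max-redirects)."""
    for _ in range(max_redirects + 1):
        reply = nodes[node_addr](command)
        if not (isinstance(reply, str) and reply.startswith(("MOVED", "ASK"))):
            return reply
        _, _, node_addr = parse_redirect(reply)  # retry against the named node
    raise RuntimeError("Too many cluster redirections")

# Fake cluster: 6379 redirects this key's slot to 6381, which answers
nodes = {
    ("127.0.0.1", 6379): lambda cmd: "MOVED 3999 127.0.0.1:6381",
    ("127.0.0.1", 6381): lambda cmd: "bar",
}
print(execute("GET foo", ("127.0.0.1", 6379), nodes))  # bar
```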

Redis utility class reference

  • https://blog.csdn.net/qcl108/article/details/107052239

