Recently several of our systems became partially unavailable because a Redis server went down. The goal here is to build a Redis cluster image that brings the Redis servers previously scattered all over the place under unified management, with high availability and automatic failover.
1: Redis cluster types
As most people know, Redis offers two kinds of clusters: Redis Sentinel, a high-availability setup with a single master at any time and all instances holding the same data; and Redis Cluster, a distributed setup with multiple masters at the same time and the data sharded across them. Given our requirements and the relative maturity of the two, this article sets up Redis Sentinel.
From its documentation:
Redis Sentinel is a system for managing multiple Redis servers (instances). It performs three tasks:
- Monitoring: Sentinel constantly checks whether your master and slave servers are working as expected.
- Notification: when one of the monitored Redis servers has a problem, Sentinel can notify the administrator or other programs via an API.
- Automatic failover: when a master stops working correctly, Sentinel starts a failover. It promotes one of the failed master's slaves to be the new master and reconfigures the other slaves to replicate it; when clients try to connect to the failed master, the system returns the address of the new master, so the new master transparently replaces the failed one.
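The notifications are ordinary Redis Pub/Sub messages published by the sentinel itself, so they can be watched directly with redis-cli. A minimal sketch, assuming the sentinel port 16381 used later in this article:

# subscribe to all Sentinel events (e.g. +sdown, +odown, +switch-master) on one sentinel
redis-cli -p 16381 psubscribe '*'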
2: Building the image
The cluster consists of one master, N slaves and M sentinels; this article uses 2 slaves and 3 sentinels as the example.
First, add redis.conf:
##redis.conf
##redis-0, master by default
port $redis_port
##requirepass: if you set a password, keep it identical across all instances
##command renaming is disabled for now
##rename-command
##enable AOF, disable snapshots
appendonly yes
#slaveof redis-master $master_port
slave-read-only yes
The instance is a master by default; uncommenting the #slaveof line turns it into a slave. Note that the master's hostname is hard-coded here as redis-master.
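Once the containers are up, the effect of this configuration can be checked with redis-cli; a quick sketch, assuming the ports used throughout this article (16379 for the master, 16380 for the slaves):

redis-cli -h redis-master -p 16379 info replication   # expect role:master and connected_slaves:2
redis-cli -p 16380 info replication                   # expect role:slave and master_host:redis-master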
Add sentinel.conf:
port $sentinel_port
dir "/tmp"
##the name, host and port of the Redis master to monitor; the last number is the quorum: the minimum number of sentinels that must agree when making decisions.
sentinel monitor mymaster redis-master $master_port 2
##how many slaves may resynchronize with the new master at the same time during a failover; the smaller the number, the longer the failover takes.
sentinel parallel-syncs mymaster 1
sentinel config-epoch mymaster 1
sentinel leader-epoch mymaster 1
sentinel current-epoch 1
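A sentinel's runtime view can be inspected with the standard SENTINEL commands; a quick sketch, again assuming the 16381 sentinel port used below:

redis-cli -p 16381 sentinel master mymaster                    # state of the monitored master, including quorum and num-slaves
redis-cli -p 16381 sentinel slaves mymaster                    # slaves this sentinel has discovered
redis-cli -p 16381 sentinel get-master-addr-by-name mymaster   # the address clients should currently connect to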
Add the startup script, which starts a master, slave, or sentinel depending on its argument:
cd /data
redis_role=$1
echo $redis_role
if [ $redis_role = "master" ] ; then
    echo "master"
    sed -i "s/\$redis_port/$redis_port/g" redis.conf
    redis-server /data/redis.conf
elif [ $redis_role = "slave" ] ; then
    echo "slave"
    sed -i "s/\$redis_port/$redis_port/g" redis.conf
    sed -i "s/#slaveof/slaveof/g" redis.conf
    sed -i "s/\$master_port/$master_port/g" redis.conf
    redis-server /data/redis.conf
elif [ $redis_role = "sentinel" ] ; then
    echo "sentinel"
    sed -i "s/\$sentinel_port/$sentinel_port/g" sentinel.conf
    sed -i "s/\$master_port/$master_port/g" sentinel.conf
    redis-sentinel /data/sentinel.conf
else
    echo "unknown role!"
fi #ifend
$redis_port, $master_port and $sentinel_port are all read from environment variables passed in when the container is started with Docker.
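For reference, this is roughly how a single container could be started by hand with plain docker run; the image tags are the ones used in the compose file below and the environment variable names match start.sh (illustrative, not the way the stack is actually deployed here):

# master
docker run -d --net host -e redis_port=16379 xxx.aliyun.com:5000/aegis-redis-ha:1.0 master
# slave
docker run -d --net host -e redis_port=16380 -e master_port=16379 xxx.aliyun.com:5000/aegis-redis-cluster:1.0 slave
# sentinel
docker run -d --net host -e sentinel_port=16381 -e master_port=16379 xxx.aliyun.com:5000/aegis-redis-cluster:1.0 sentinel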
Write the Dockerfile:
FROM redis:3-alpine
MAINTAINER voidman
COPY Shanghai /etc/localtime
COPY redis.conf /data/redis.conf
COPY sentinel.conf /data/sentinel.conf
COPY start.sh /data/start.sh
RUN chmod +x /data/start.sh
RUN chown redis:redis /data/*
ENTRYPOINT ["sh","/data/start.sh"]
CMD ["master"]
The redis alpine image is chosen as the base image because it is tiny, only 9 MB. After setting the timezone and copying in the configuration files, the permissions and ownership are adjusted, since the base image runs under the redis user and group. The ENTRYPOINT/CMD combination makes the container start as a master by default.
After the build, the image is only 15 MB.
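To reproduce the build, something like the following (the tag is illustrative):

docker build -t aegis-redis-cluster:1.0 .
docker images aegis-redis-cluster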
3: Startup
The services are defined in docker-compose format:
redis-master-host:
  environment:
    redis_port: '16379'
  labels:
    io.rancher.container.pull_image: always
  tty: true
  image: xxx.aliyun.com:5000/aegis-redis-ha:1.0
  stdin_open: true
  net: host
redis-slaves:
  environment:
    master_port: '16379'
    redis_port: '16380'
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: name=slaves
    io.rancher.container.pull_image: always
    name: slaves
  tty: true
  command:
    - slave
  image: xxx.aliyun.com:5000/aegis-redis-cluster:1.0
  stdin_open: true
  net: host
redis-sentinels:
  environment:
    master_port: '16379'
    sentinel_port: '16381'
  labels:
    io.rancher.container.pull_image: always
    name: sentinels
    io.rancher.scheduler.affinity:container_label_ne: name=sentinels
  tty: true
  command:
    - sentinel
  image: xxx.aliyun.com:5000/aegis-redis-cluster:1.0
  stdin_open: true
  net: host
First the master is started, with port 16379 passed in and host networking. Then the slaves are started as slaves of the master on 16379, with a scheduling policy that spreads them across hosts as much as possible. The sentinels are set up in the same way.
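On a plain Docker host the same file can in principle be brought up with docker-compose; the io.rancher.* labels only take effect when the stack is deployed through Rancher:

docker-compose up -d
docker-compose ps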
4: Testing
Java client test (snippet):
// initialization
Set<String> sentinels = new HashSet<>(16);
sentinels.add("redis-sentinel1.aliyun.com:16381");
sentinels.add("redis-sentinel2.aliyun.com:16381");
sentinels.add("redis-sentinel3.aliyun.com:16381");
GenericObjectPoolConfig config = new GenericObjectPoolConfig();
config.setBlockWhenExhausted(true);
config.setMaxTotal(10);
config.setMaxWaitMillis(1000L);
config.setMaxIdle(25);
config.setMaxTotal(32);
jedisPool = new JedisSentinelPool("mymaster", sentinels, config);

// read and write continuously
while (true) {
    AegisRedis.set("testSentinel", "ok");
    System.err.println(AegisRedis.get("testSentinel"));
    Thread.sleep(3000);
}
Sentinel failure test
Kill one of the sentinels; the client logs:
SEVERE: Lost connection to Sentinel at redis-sentinel2.aliyun.com:16381. Sleeping 5000ms and retrying.
Reads and writes continue normally. Even after all the sentinels are killed, reads and writes still succeed while the client keeps trying to reconnect to the sentinels. This shows that sentinels are only needed when electing a master and performing failover; once a master has been chosen, Redis keeps serving traffic even if every sentinel is down.
However, if the Jedis pool is re-initialized at this point, it fails with:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: All sentinels down, cannot determine where is mymaster master is running...
The sentinels need no configuration about each other: each one publishes its own ip, port and other details on the __sentinel__:hello channel of the master and slaves it monitors, and by subscribing to that channel every sentinel maintains its own list of the known sentinels.
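This auto-discovery can be observed directly; a small sketch using standard commands and the ports from this article:

redis-cli -p 16381 sentinel sentinels mymaster    # the other sentinels this instance has discovered
redis-cli -p 16379 subscribe __sentinel__:hello   # watch the raw hello messages exchanged via the master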
Slave failure test
Kill one of the slaves: the client is not affected at all and does not even notice. The master logs the lost connection:
6:M 14 Apr 16:31:33.698 # Connection with slave ip_address:16380 lost.
The sentinels log it as well:
7:X 14 Apr 16:30:39.852 # -sdown slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
7:X 14 Apr 16:32:03.786 # +sdown slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
Now bring that slave back:
9:S 14 Apr 16:36:57.441 * Connecting to MASTER redis-master:16379
9:S 14 Apr 16:36:57.449 * MASTER <-> SLAVE sync started
9:S 14 Apr 16:36:57.449 * Non blocking connect for SYNC fired the event.
9:S 14 Apr 16:36:57.449 * Master replied to PING, replication can continue...
9:S 14 Apr 16:36:57.449 * Partial resynchronization not possible (no cached master)
9:S 14 Apr 16:36:57.450 * Full resync from master: 0505a8e1049095ce597a137ae1161ed4727533d3:84558
9:S 14 Apr 16:36:57.462 * SLAVE OF ip_address:16379 enabled (user request from 'id=3 addr=ip_address2:57122 fd=10 name=sentinel-11d82028-cmd age=0 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=0 qbuf-free=32768 obl=36 oll=0 omem=0 events=rw cmd=exec')
9:S 14 Apr 16:36:57.462 # CONFIG REWRITE executed with success.
9:S 14 Apr 16:36:58.451 * Connecting to MASTER ip_address:16379
9:S 14 Apr 16:36:58.451 * MASTER <-> SLAVE sync started
9:S 14 Apr 16:36:58.451 * Non blocking connect for SYNC fired the event.
9:S 14 Apr 16:36:58.451 * Master replied to PING, replication can continue...
9:S 14 Apr 16:36:58.451 * Partial resynchronization not possible (no cached master)
9:S 14 Apr 16:36:58.453 * Full resync from master: 0505a8e1049095ce597a137ae1161ed4727533d3:84721
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: receiving 487 bytes from master
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: Flushing old data
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: Loading DB in memory
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: Finished with success
9:S 14 Apr 16:36:58.537 * Background append only file rewriting started by pid 12
9:S 14 Apr 16:36:58.563 * AOF rewrite child asks to stop sending diffs.
12:C 14 Apr 16:36:58.563 * Parent agreed to stop sending diffs. Finalizing AOF...
12:C 14 Apr 16:36:58.563 * Concatenating 0.00 MB of AOF diff received from parent.
12:C 14 Apr 16:36:58.563 * SYNC append only file rewrite performed
12:C 14 Apr 16:36:58.564 * AOF rewrite: 0 MB of memory used by copy-on-write
9:S 14 Apr 16:36:58.652 * Background AOF rewrite terminated with success
9:S 14 Apr 16:36:58.653 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
9:S 14 Apr 16:36:58.653 * Background AOF rewrite finished successfully
It immediately restores its data from the master and ends up consistent with it.
Master failure test
This time the client throws an exception:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused
The sentinels notice the failure. Each sentinel first subjectively marks the master (ip_address 16379) as down, then asks the other sentinels whether they also consider the master down. Once 2 sentinels agree (the 2 is the quorum configured earlier in sentinel.conf; it is generally recommended to use more than half the number of sentinels), the master is objectively marked as down. A new round of voting begins, a leader is elected, the slave at ip_address:16380 is selected and promoted, and once the failover completes the remaining slaves are told to replicate the new master. The detailed logs follow; note that the client was briefly unable to connect while the election was in progress.
13:X 14 Apr 16:40:36.162 # +sdown master mymaster ip_address 16379
13:X 14 Apr 16:40:36.233 # +odown master mymaster ip_address 16379 #quorum 2/2
13:X 14 Apr 16:40:36.233 # +new-epoch 10
13:X 14 Apr 16:40:36.233 # +try-failover master mymaster ip_address 16379
13:X 14 Apr 16:40:36.238 # +vote-for-leader 0a632ec0550401e66486846b521ad2de8c345695 10
13:X 14 Apr 16:40:36.249 # ip_address2:16381 voted for 0a632ec0550401e66486846b521ad2de8c345695 10
13:X 14 Apr 16:40:36.261 # ip_address3:16381 voted for 4e590c09819a793faf1abf185a0d0db07dc89f6a 10
13:X 14 Apr 16:40:36.309 # +elected-leader master mymaster ip_address 16379
13:X 14 Apr 16:40:36.309 # +failover-state-select-slave master mymaster ip_address 16379
13:X 14 Apr 16:40:36.376 # +selected-slave slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:36.376 * +failover-state-send-slaveof-noone slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:36.459 * +failover-state-wait-promotion slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:37.256 # +promoted-slave slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:37.256 # +failover-state-reconf-slaves master mymaster ip_address 16379
13:X 14 Apr 16:40:37.303 * +slave-reconf-sent slave ip_address3:16380 ip_address3 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.288 * +slave-reconf-inprog slave ip_address3:16380 ip_address3 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.289 * +slave-reconf-done slave ip_address3:16380 ip_address3 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.378 * +slave-reconf-sent slave ip_address2:16380 ip_address2 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.436
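For completeness: a failover can also be rehearsed without killing the master, using the standard SENTINEL FAILOVER command against any sentinel (ports follow this article's examples):

redis-cli -p 16381 sentinel failover mymaster                  # force a failover as if the master had failed
redis-cli -p 16381 sentinel get-master-addr-by-name mymaster   # confirm the new master address afterwards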