Redis Cluster: multiple Redis nodes are interconnected over the network and the data is sharded across all of them. Every master has one slave (it may have several); a slave serves no client requests and acts only as a standby.
Multi-key commands (such as mset/mget) are not supported across slots: Redis spreads keys evenly over the nodes, so the keys of a single command may live on different nodes, and under high concurrency this would hurt performance and produce unpredictable behavior (see the example after this list).
Nodes can be added and removed online.
A client can connect to any master node for both reads and writes.
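For instance, a multi-key command fails as soon as its keys hash to different slots; hash tags ({...}) force related keys into one slot. A hypothetical session against the cluster built below (key names are illustrative):
192.168.0.14:6379> mget foo bar
(error) CROSSSLOT Keys in request don't hash to the same slot
192.168.0.14:6379> mset {foo}:a 1 {foo}:b 2
-> Redirected to slot [12182] located at 192.168.0.14:6380
OK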
Lab environment
centos6.9_x64
redis_master 192.168.0.14 6379/6380/6381 (three instances on one host)
redis_slave 192.168.0.15 6382/6383/6384 (three instances on one host)
Software
redis-4.0.8.tar.gz
Installation
yum install -y wget lrzsz make gcc gcc-c++
yum install centos-release-scl-rh
ls /etc/yum.repos.d/CentOS-SCLo-scl-rh.repo
/etc/yum.repos.d/CentOS-SCLo-scl-rh.repo
yum install -y rh-ruby23
scl enable rh-ruby23 bash
ruby -v
ruby 2.3.8p459 (2018-10-18 revision 65136) [x86_64-linux-gnu]
gem install redis    # run on redis_master: redis-trib.rb needs the ruby redis gem
tar zxvf /root/redis-4.0.8.tar.gz
cd /root/redis-4.0.8
make && make install PREFIX=/usr/local/redis
mkdir -pv /usr/local/redis/cluster
cp -pv /root/redis-4.0.8/src/redis-trib.rb /usr/local/redis/
ll /usr/local/redis/
bin
cluster
redis-trib.rb
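Optionally verify that redis-trib.rb runs and can see the ruby redis gem (execute it inside the scl enable rh-ruby23 bash shell); invoked with help it just prints its usage:
/usr/local/redis/redis-trib.rb help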
cp -pv /root/redis-4.0.8/redis.conf /usr/local/redis/cluster/6379.conf    # 6379/6380/6381 use the same configuration
echo > /usr/local/redis/cluster/6379.conf    # empty the copied file, then fill in the settings below
touch /var/log/redis.log
cat /usr/local/redis/cluster/6379.conf    # configuration for 6379; for 6380/6381 change port, pidfile, dbfilename and cluster-config-file to match
bind 127.0.0.1 192.168.0.14
protected-mode yes
port 6379
daemonize yes
appendonly yes
appendfsync everysec
pidfile /var/run/redis_6379.pid
dbfilename dump_6379.rdb
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 10
#cluster-migration-barrier 1
cluster-require-full-coverage yes
logfile "/var/log/redis.log"
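Because every per-instance value in this file embeds the port number, the 6380 and 6381 configs can be derived from 6379.conf; a minimal sketch:
# generate 6380.conf and 6381.conf by substituting the port everywhere it appears
for p in 6380 6381; do
  sed "s/6379/$p/g" /usr/local/redis/cluster/6379.conf > /usr/local/redis/cluster/$p.conf
done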
/usr/local/redis/bin/redis-server /usr/local/redis/cluster/6379.conf &    # start the services (with daemonize yes the trailing & is redundant)
/usr/local/redis/bin/redis-server /usr/local/redis/cluster/6380.conf &
/usr/local/redis/bin/redis-server /usr/local/redis/cluster/6381.conf &
netstat -tuplna | grep redis
tcp 0 0 192.168.0.14:6379 0.0.0.0:* LISTEN 9989/redis-server 1
tcp 0 0 192.168.0.14:6380 0.0.0.0:* LISTEN 9994/redis-server 1
tcp 0 0 192.168.0.14:6381 0.0.0.0:* LISTEN 9999/redis-server 1
ps -ef | grep redis
root 9989 1 0 14:58 ? 00:00:00 /usr/local/redis/bin/redis-server 192.168.0.14:6379 [cluster]
root 9994 1 0 14:58 ? 00:00:00 /usr/local/redis/bin/redis-server 192.168.0.14:6380 [cluster]
root 9999 1 0 14:58 ? 00:00:00 /usr/local/redis/bin/redis-server 192.168.0.14:6381 [cluster]
All of the steps above are performed on redis_master.
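As a quick sanity check, each instance should answer PONG:
for p in 6379 6380 6381; do /usr/local/redis/bin/redis-cli -h 192.168.0.14 -p $p ping; done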
scp /root/redis-4.0.8.tar.gz [email protected]:/root/
tar zxvf /root/redis-4.0.8.tar.gz    # the following steps are performed on redis_slave
cd /root/redis-4.0.8
make && make install PREFIX=/usr/local/redis
mkdir -pv /usr/local/redis/cluster
ll /usr/local/redis/
bin
cluster
scp /usr/local/redis/cluster/6379.conf [email protected]:/usr/local/redis/cluster/6382.conf    # run on redis_master once the cluster directory exists; 6382/6383/6384 use the same configuration
echo > /usr/local/redis/cluster/6383.conf
echo > /usr/local/redis/cluster/6384.conf
cat /usr/local/redis/cluster/6382.conf    # configuration for 6382; for 6383/6384 change port, pidfile, dbfilename and cluster-config-file to match
bind 192.168.0.15
protected-mode yes
port 6382
daemonize yes
pidfile /var/run/redis_6382.pid
dbfilename dump_6382.rdb
cluster-enabled yes
cluster-config-file nodes-6382.conf
cluster-node-timeout 5000
cluster-slave-validity-factor 10
#cluster-migration-barrier 1
cluster-require-full-coverage yes
/usr/local/redis/bin/redis-server /usr/local/redis/cluster/6382.conf &    # start the services
/usr/local/redis/bin/redis-server /usr/local/redis/cluster/6383.conf &
/usr/local/redis/bin/redis-server /usr/local/redis/cluster/6384.conf &
netstat -tuplna | grep redis
tcp 0 0 192.168.0.15:6382 0.0.0.0:* LISTEN 7165/redis-server 1
tcp 0 0 192.168.0.15:6383 0.0.0.0:* LISTEN 7158/redis-server 1
tcp 0 0 192.168.0.15:6384 0.0.0.0:* LISTEN 7170/redis-server 1
ps -ef | grep redis
root 7158 1 0 20:55 ? 00:00:00 /usr/local/redis/bin/redis-server 192.168.0.15:6383 [cluster]
root 7165 1 0 20:56 ? 00:00:00 /usr/local/redis/bin/redis-server 192.168.0.15:6382 [cluster]
root 7170 1 0 20:56 ? 00:00:00 /usr/local/redis/bin/redis-server 192.168.0.15:6384 [cluster]
/usr/local/redis/redis-trib.rb create --replicas 1 192.168.0.14:6379 192.168.0.14:6380 192.168.0.14:6381 192.168.0.15:6382 192.168.0.15:6383 192.168.0.15:6384    # run on redis_master inside the scl ruby shell; with --replicas 1, creating a cluster with fewer than 6 nodes fails
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.0.14:6379
192.168.0.15:6382
192.168.0.14:6380
Adding replica 192.168.0.15:6384 to 192.168.0.14:6379
Adding replica 192.168.0.14:6381 to 192.168.0.15:6382
Adding replica 192.168.0.15:6383 to 192.168.0.14:6380
M: 083354570b0596e14e474554d28a6f1cb2e567c8 192.168.0.14:6379
slots:0-5460 (5461 slots) master
M: ce932aef7c6fbeb6571898c6923c55032067b43b 192.168.0.14:6380
slots:10923-16383 (5461 slots) master
S: 168f68a61b035be5fab4dfe9a109a2e1c472c7da 192.168.0.14:6381
replicates c4305af229ddeb679c4783da856bfb2f6f1d4b38
M: c4305af229ddeb679c4783da856bfb2f6f1d4b38 192.168.0.15:6382
slots:5461-10922 (5462 slots) master
S: b3e868782e1c5142c9c30317f214741aa380bfdd 192.168.0.15:6383
replicates ce932aef7c6fbeb6571898c6923c55032067b43b
S: 7897e6cd7f5de220fa7f9248d617254405e408dc 192.168.0.15:6384
replicates 083354570b0596e14e474554d28a6f1cb2e567c8
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.0.14:6379)
M: 083354570b0596e14e474554d28a6f1cb2e567c8 192.168.0.14:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 168f68a61b035be5fab4dfe9a109a2e1c472c7da 192.168.0.14:6381
slots: (0 slots) slave
replicates c4305af229ddeb679c4783da856bfb2f6f1d4b38
S: b3e868782e1c5142c9c30317f214741aa380bfdd 192.168.0.15:6383
slots: (0 slots) slave
replicates ce932aef7c6fbeb6571898c6923c55032067b43b
M: ce932aef7c6fbeb6571898c6923c55032067b43b 192.168.0.14:6380
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 7897e6cd7f5de220fa7f9248d617254405e408dc 192.168.0.15:6384
slots: (0 slots) slave
replicates 083354570b0596e14e474554d28a6f1cb2e567c8
M: c4305af229ddeb679c4783da856bfb2f6f1d4b38 192.168.0.15:6382
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
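The same health information is available at any time: redis-trib.rb check re-runs this consistency check against any node, and cluster info should report cluster_state:ok (output abridged):
/usr/local/redis/redis-trib.rb check 192.168.0.14:6379
/usr/local/redis/bin/redis-cli -h 192.168.0.14 -p 6379 cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3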
/usr/local/redis/bin/redis-cli -c -h 192.168.0.14    # run on redis_master to inspect cluster state; -c enables cluster mode, port defaults to 6379
192.168.0.14:6379> cluster nodes
168f68a61b035be5fab4dfe9a109a2e1c472c7da 192.168.0.14:6381@16381 slave c4305af229ddeb679c4783da856bfb2f6f1d4b38 0 1570777795825 4 connected
b3e868782e1c5142c9c30317f214741aa380bfdd 192.168.0.15:6383@16383 slave ce932aef7c6fbeb6571898c6923c55032067b43b 0 1570777795826 5 connected
ce932aef7c6fbeb6571898c6923c55032067b43b 192.168.0.14:6380@16380 master - 0 1570777795321 2 connected 10923-16383
7897e6cd7f5de220fa7f9248d617254405e408dc 192.168.0.15:6384@16384 slave 083354570b0596e14e474554d28a6f1cb2e567c8 0 1570777795523 6 connected
083354570b0596e14e474554d28a6f1cb2e567c8 192.168.0.14:6379@16379 myself,master - 0 1570777795000 1 connected 0-5460
c4305af229ddeb679c4783da856bfb2f6f1d4b38 192.168.0.15:6382@16382 master - 0 1570777794315 4 connected 5461-10922
192.168.0.14:6379>
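With -c, redis-cli follows MOVED redirects transparently, which is why any master accepts any read or write; an illustrative session (the slot and target node depend on the key):
192.168.0.14:6379> set foo bar
-> Redirected to slot [12182] located at 192.168.0.14:6380
OK
192.168.0.14:6380> get foo
"bar"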
/usr/local/redis/redis-trib.rb add-node --slave 192.168.0.15:6388 192.168.0.14:6379    # add a new slave node: the first address is the new node, the second is any node already in the cluster
/usr/local/redis/redis-trib.rb del-node 192.168.0.14:6379 <node-id>    # remove a node; del-node takes a cluster entry point plus the ID of the node to delete
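The node ID is the 40-character hash in the first column of cluster nodes; a minimal lookup for the hypothetical 6388 node before deleting it (a master that still owns slots must be resharded empty before del-node will remove it):
/usr/local/redis/bin/redis-cli -h 192.168.0.14 -p 6379 cluster nodes | grep 6388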