1. Prerequisites for setting up Redis
(1) Compiling Redis requires the gcc toolchain, so install it first:
yum install gcc-c++
If you are not sure whether gcc is already installed, check with:
rpm -qa | grep gcc
(2) Download the Redis install package
Download link for Redis 3.2.8
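If that link is unavailable, the archive can usually be fetched directly from the official release host (this assumes the standard redis.io release URL pattern):
cd /tmp
wget http://download.redis.io/releases/redis-3.2.8.tar.gz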
2. Install Redis
cd /tmp
tar -zxvf redis-3.2.8.tar.gz -C /usr/local
The archive is now extracted under /usr/local; to make later operations easier, rename the folder:
cd /usr/local
mv redis-3.2.8 redis
Compile and install Redis:
cd redis
make && make install
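To confirm the build and install succeeded, print the version of the freshly installed binary:
redis-server --version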
Create the folders the cluster needs:
mkdir -p /usr/local/redis/cluster
cd /usr/local/redis/cluster
mkdir {16001,16002,16003,16004,16005,16006,log}
Copy the configuration file, which sits in the redis folder:
cp /usr/local/redis/redis.conf /usr/local/redis/cluster/16001
Edit the configuration file:
vim /usr/local/redis/cluster/16001/redis.conf
Change the following settings:
bind 192.168.42.81
port 16001
daemonize yes
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 5000
appendonly yes
logfile "/usr/local/redis/cluster/log/redis-16001.log"
bind is the server's IP address, which you can check with ifconfig -a
port is the port this Redis instance runs on
daemonize makes the Redis instance run as a background daemon
cluster-enabled turns on cluster mode
cluster-config-file is the file where Redis saves its cluster configuration at runtime; we never edit this file ourselves
cluster-node-timeout is how long (in milliseconds) a node may be unresponsive before it is considered down
appendonly turns on AOF persistence
Copy this configuration file into each of the other five folders, changing the port and the log file path in each copy; a loop that automates this is sketched below.
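A minimal shell sketch of that copy-and-edit step, assuming the 16001 file above is the template (the sed patterns match the port and logfile lines shown earlier):
for port in 16002 16003 16004 16005 16006; do
  cp /usr/local/redis/cluster/16001/redis.conf /usr/local/redis/cluster/$port/redis.conf
  # point this copy at its own port and its own log file
  sed -i "s/port 16001/port $port/" /usr/local/redis/cluster/$port/redis.conf
  sed -i "s/redis-16001.log/redis-$port.log/" /usr/local/redis/cluster/$port/redis.conf
done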
Start each Redis instance:
cd /usr/local/redis/cluster/16001
redis-server redis.conf
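To start all six in one go, a sketch assuming the directory layout created above (each instance is started from its own folder so that its cluster-config-file is written there):
for port in 16001 16002 16003 16004 16005 16006; do
  cd /usr/local/redis/cluster/$port
  redis-server redis.conf
done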
After starting them, check that they are running:
[root@198 ~]# pstree -p | grep redis
 |-redis-server(22131)-+-{redis-server}(22132)
 |                     `-{redis-server}(22133)
 |-redis-server(22350)-+-{redis-server}(22351)
 |                     `-{redis-server}(22352)
 |-redis-server(22369)-+-{redis-server}(22370)
 |                     `-{redis-server}(22371)
 |-redis-server(22406)-+-{redis-server}(22407)
 |                     `-{redis-server}(22408)
 |-redis-server(22437)-+-{redis-server}(22438)
 |                     `-{redis-server}(22439)
 |-redis-server(22463)-+-{redis-server}(22464)
 |                     `-{redis-server}(22465)
If all six nodes started normally, this step succeeded. If any instance did not start, check its log to see what went wrong; the usual cause is that the port is already in use.
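A quick way to confirm which ports are actually listening (netstat -tlnp works the same way on older systems without ss):
ss -tlnp | grep redis-server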
3. Building the Redis cluster
The redis-trib.rb management script depends on Ruby and RubyGems:
yum install ruby
yum install rubygems
Because the gem server may be unreachable from your machine, you may need to download the redis gem file manually: http://download.csdn.net/detail/ltr15036900300/9172823
cd /tmp
gem install redis-3.2.1.gem
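To verify the gem was installed, list it (gem list is part of RubyGems):
gem list redis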
With these required components installed, you can create the cluster:
cd /usr/local/redis/src
./redis-trib.rb create --replicas 1 192.168.42.81:16001 192.168.42.81:16002 192.168.42.81:16003 192.168.42.81:16004 192.168.42.81:16005 192.168.42.81:16006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.42.81:16001
192.168.42.81:16002
192.168.42.81:16003
Adding replica 192.168.42.81:16004 to 192.168.42.81:16001
Adding replica 192.168.42.81:16005 to 192.168.42.81:16002
Adding replica 192.168.42.81:16006 to 192.168.42.81:16003
M: 881e79832cca2169558a0e19b0fbbbcb9798736e 192.168.42.81:16001
   slots:0-5460 (5461 slots) master
M: c7840582e5cfe2acea4ffe0d2f59214b31cd43d5 192.168.42.81:16002
   slots:5461-10922 (5462 slots) master
M: 2d1418d340bc25ada8b095f06f225c2ebf4fb9dd 192.168.42.81:16003
   slots:10923-16383 (5461 slots) master
S: e033ed4a2f1150bfdb677f1d8bbb5d3563531aac 192.168.42.81:16004
   replicates 881e79832cca2169558a0e19b0fbbbcb9798736e
S: 32c54acde641e9b20847024aa9ef33ebcf8c7507 192.168.42.81:16005
   replicates c7840582e5cfe2acea4ffe0d2f59214b31cd43d5
S: f859ccfdf5b55e4bc0436b646c832db990213cc5 192.168.42.81:16006
   replicates 2d1418d340bc25ada8b095f06f225c2ebf4fb9dd
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.....
>>> Performing Cluster Check (using node 192.168.42.81:16001)
M: 881e79832cca2169558a0e19b0fbbbcb9798736e 192.168.42.81:16001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c7840582e5cfe2acea4ffe0d2f59214b31cd43d5 192.168.42.81:16002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: e033ed4a2f1150bfdb677f1d8bbb5d3563531aac 192.168.42.81:16004
   slots: (0 slots) slave
   replicates 881e79832cca2169558a0e19b0fbbbcb9798736e
M: 2d1418d340bc25ada8b095f06f225c2ebf4fb9dd 192.168.42.81:16003
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: f859ccfdf5b55e4bc0436b646c832db990213cc5 192.168.42.81:16006
   slots: (0 slots) slave
   replicates 2d1418d340bc25ada8b095f06f225c2ebf4fb9dd
S: 32c54acde641e9b20847024aa9ef33ebcf8c7507 192.168.42.81:16005
   slots: (0 slots) slave
   replicates c7840582e5cfe2acea4ffe0d2f59214b31cd43d5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
You can check the status of each node:
./redis-trib.rb check 192.168.42.81:16001
>>> Performing Cluster Check (using node 192.168.42.81:16001)
M: 881e79832cca2169558a0e19b0fbbbcb9798736e 192.168.42.81:16001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: c7840582e5cfe2acea4ffe0d2f59214b31cd43d5 192.168.42.81:16002
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: e033ed4a2f1150bfdb677f1d8bbb5d3563531aac 192.168.42.81:16004
   slots: (0 slots) slave
   replicates 881e79832cca2169558a0e19b0fbbbcb9798736e
M: 2d1418d340bc25ada8b095f06f225c2ebf4fb9dd 192.168.42.81:16003
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: f859ccfdf5b55e4bc0436b646c832db990213cc5 192.168.42.81:16006
   slots: (0 slots) slave
   replicates 2d1418d340bc25ada8b095f06f225c2ebf4fb9dd
S: 32c54acde641e9b20847024aa9ef33ebcf8c7507 192.168.42.81:16005
   slots: (0 slots) slave
   replicates c7840582e5cfe2acea4ffe0d2f59214b31cd43d5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Connect to the cluster with redis-cli:
redis-cli -h 192.168.42.81 -p 16001 -c
Run a simple test:
192.168.42.81:16001> set test 111
-> Redirected to slot [6918] located at 192.168.42.81:16002
OK
192.168.42.81:16002> get test
"111"
The cluster is up and running!
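As a final sanity check, the standard CLUSTER INFO and CLUSTER NODES commands report cluster health and topology; cluster_state:ok in the first output confirms everything is in order:
redis-cli -h 192.168.42.81 -p 16001 -c cluster info
redis-cli -h 192.168.42.81 -p 16001 -c cluster nodes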