OS: CentOS Linux release 7.2.1511 (Core)
GlusterFS: 4.1
Redis: 4.9.105 (the version string reported by the 5.0 RC images)
# docker pull redis:5.0-rc5-alpine3.8 // official image
# docker pull racccosta/redis // Redis cluster build tool (redis-trib)
Docker version:
# docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built:
OS/Arch: linux/amd64
Each master can have multiple slaves. When a master goes offline, the Redis cluster elects one of its slaves as the new master; when the old master comes back online, it rejoins as a slave of the new master.
Since we plan to build a 6-node Redis cluster, 6 GlusterFS volumes are shared, one per node.
#!/bin/bash
for loop in 0 1 2 3 4 5
do
    gluster volume create redis-vol-$loop replica 3 paasm1:/dcos/redis-brick/pv$loop paasm2:/dcos/redis-brick/pv$loop paashar:/dcos/redis-brick/pv$loop
    gluster volume start redis-vol-$loop
done
# vi glusterfs-endpoints.yml
---
kind: Endpoints
apiVersion: v1
metadata:
  name: glusterfs-cluster
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.142.71.120
  ports:
  - port: 7096
- addresses:
  - ip: 10.142.71.121
  ports:
  - port: 7096
- addresses:
  - ip: 10.142.71.123
  ports:
  - port: 7096
# kubectl apply -f glusterfs-endpoints.yml
# vi glusterfs-service.yml
---
kind: Service
apiVersion: v1
metadata:
  name: glusterfs-cluster
  namespace: kube-system
spec:
  ports:
  - port: 7096
# kubectl apply -f glusterfs-service.yml
Each Redis Pod needs a dedicated PV for its data, so 6 PVs are defined:
# vi glusterfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv000
  namespace: kube-system
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-0"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  namespace: kube-system
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-1"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  namespace: kube-system
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-2"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  namespace: kube-system
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-3"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  namespace: kube-system
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-4"
    readOnly: false
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  namespace: kube-system
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-5"
    readOnly: false
# kubectl apply -f glusterfs-pv.yml
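The six near-identical PV definitions can also be generated with a short loop, in the same spirit as the gluster volume script above (a sketch; it assumes the pv00N / redis-vol-N naming used in this article):

```shell
#!/bin/bash
# Generate glusterfs-pv.yml: one PersistentVolume document per GlusterFS volume.
> glusterfs-pv.yml
for i in 0 1 2 3 4 5
do
cat >> glusterfs-pv.yml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv00$i
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "redis-vol-$i"
    readOnly: false
---
EOF
done
```

The resulting file can be applied as before with kubectl apply -f glusterfs-pv.yml.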
# vi redis-service.yml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: kube-system
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster
# kubectl apply -f redis-service.yml
# vi redis.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-app
  namespace: kube-system
spec:
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: hub.cmss.com:5000/registry.paas/library/redis:5.0
        command:
        - "redis-server"
        args:
        - "--protected-mode"
        - "no"
        - "--cluster-enabled"
        - "yes"
        - "--appendonly"
        - "yes"
        resources:
          requests:
            cpu: "100m"
            memory: "100Mi"
        ports:
        - name: redis
          containerPort: 6379
          protocol: "TCP"
        - name: cluster
          containerPort: 16379
          protocol: "TCP"
        volumeMounts:
        - name: "redis-data"
          mountPath: "/data"
  volumeClaimTemplates:
  - metadata:
      name: redis-data
    spec:
      accessModes: ["ReadWriteMany"]
      resources:
        requests:
          storage: 2Gi
# kubectl apply -f redis.yml
In total 6 Redis Pods are created; 3 of them will become masters and the other 3 their slaves.
The Redis data path is declared through volumeClaimTemplates; each Pod's claim binds to one of the PVs created earlier.
# kubectl get pv -n kube-system
All 6 previously created PVs are now Bound (note that the PVC-to-PV binding order is arbitrary):
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
pv000   2Gi        RWX            Retain           Bound    kube-system/redis-data-redis-app-2                           1h
pv001   2Gi        RWX            Retain           Bound    kube-system/redis-data-redis-app-3                           1h
pv002   2Gi        RWX            Retain           Bound    kube-system/redis-data-redis-app-4                           1h
pv003   2Gi        RWX            Retain           Bound    kube-system/redis-data-redis-app-5                           1h
pv004   2Gi        RWX            Retain           Bound    kube-system/redis-data-redis-app-0                           1h
pv005   2Gi        RWX            Retain           Bound    kube-system/redis-data-redis-app-1                           1h
To make redis-trib convenient to use, create a wrapper script at /usr/local/bin/redis-trib:
#!/bin/bash
/usr/bin/docker run -it --privileged --rm \
--net=host \
redis-trib:latest \
"$@"
# chmod 777 /usr/local/bin/redis-trib
Run it directly to see the usage:
# redis-trib
Usage: redis-trib <command> <options> <arguments ...>

  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  check           host:port
  info            host:port
  fix             host:port
                  --timeout <arg>
  reshard         host:port
                  --from <arg>
                  --to <arg>
                  --slots <arg>
                  --yes
                  --timeout <arg>
                  --pipeline <arg>
  rebalance       host:port
                  --weight <arg>
                  --auto-weights
                  --use-empty-masters
                  --timeout <arg>
                  --simulate
                  --pipeline <arg>
                  --threshold <arg>
  add-node        new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  del-node        host:port node_id
  set-timeout     host:port milliseconds
  call            host:port command arg arg .. arg
  import          host:port
                  --from <arg>
                  --copy
                  --replace
  help            (show this help)

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
Check the Pods that were started:
# kubectl get po -n kube-system -o wide |grep redis
redis-app-0 1/1 Running 0 53s 10.222.98.88 paasm3
redis-app-1 1/1 Running 0 50s 10.222.88.205 paasm2
redis-app-2 1/1 Running 0 47s 10.222.65.206 paasm1
redis-app-3 1/1 Running 0 45s 10.222.92.210 paasing
redis-app-4 1/1 Running 0 41s 10.222.66.47 paashar
redis-app-5 1/1 Running 0 38s 10.222.88.206 paasm2
Create the cluster with redis-trib; it will choose 3 of the 6 nodes as masters and assign the remaining 3 as their slaves:
# redis-trib create --replicas 1 \
10.222.98.90:6379 \
10.222.88.208:6379 \
10.222.65.208:6379 \
10.222.92.213:6379 \
10.222.66.49:6379 \
10.222.88.209:6379
--replicas 1 means each master gets one replica (i.e. slave); with 6 nodes this yields 3 master/slave pairs.
The Headless Service we created earlier for the StatefulSet has no Cluster IP and so cannot be used as a client entry point. We therefore create another Service dedicated to providing access and load balancing for the Redis cluster:
# vi redis-access-service.yml
apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  namespace: kube-system
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster
# kubectl apply -f redis-access-service.yml
# kubectl get svc -n kube-system |grep redis-access
redis-access-service   ClusterIP   10.233.11.2   <none>   6379/TCP   16h
Redis can now be reached at 10.233.11.2:6379 from within the k8s cluster.
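This ClusterIP Service is still only reachable from inside the cluster. If access from outside is needed, a NodePort variant could look like the following sketch (the name and the nodePort value 30379 are assumptions; note also that external Redis Cluster clients will receive MOVED redirects pointing at Pod IPs, which are not routable from outside, so this is mainly useful for testing):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-nodeport-service   # hypothetical name
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - name: redis-port
    port: 6379
    targetPort: 6379
    nodePort: 30379              # assumed value in the default 30000-32767 range
  selector:
    app: redis
    appCluster: redis-cluster
```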
Check the cluster state:
# redis-trib check 10.233.11.2:6379
M: 7a2984e16a9911296918822eea625e9d69e5d062 10.233.11.2:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 38e2c1e93aa3fc792b0c1ae4c7c9fe71a6713a0f 10.222.66.49:6379
   slots: (0 slots) slave
   replicates 7a2984e16a9911296918822eea625e9d69e5d062
S: 9f13a91cdc81f102f1f05ef4ac7ecfd7019ea1ca 10.222.88.209:6379
   slots: (0 slots) slave
   replicates fe586bc12346bd26548f9dc909ff9c2309b6db0a
M: fe586bc12346bd26548f9dc909ff9c2309b6db0a 10.222.65.208:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 7d8f97ab405c1cb281f19feaba439b05f7e51709 10.222.92.213:6379
   slots: (0 slots) slave
   replicates c275b4cd089b000c70bcf21f6fb6f6a6f3dbb0a3
M: c275b4cd089b000c70bcf21f6fb6f6a6f3dbb0a3 10.222.98.90:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
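As the check output shows, the three masters split the 16384 hash slots roughly evenly (0-5460, 5461-10922, 10923-16383). A key is mapped to a slot as CRC16(key) mod 16384; the following bash sketch reproduces that computation (it ignores the {hash tag} rule, under which only the substring inside the first {...} is hashed):

```shell
# Redis Cluster hash slot of a key: CRC16-CCITT (XMODEM variant) mod 16384.
key_slot() {
  local s=$1 crc=0 b i j
  for ((i = 0; i < ${#s}; i++)); do
    printf -v b '%d' "'${s:i:1}"          # character -> ASCII code
    crc=$(( (crc ^ (b << 8)) & 0xFFFF ))  # fold the byte into the high bits
    for ((j = 0; j < 8; j++)); do         # MSB-first shift with poly 0x1021
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}

key_slot foo    # -> 12182
```

key_slot foo prints 12182, matching what redis-cli reports for CLUSTER KEYSLOT foo, so it lands in the third master's 10923-16383 range above.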
Reference: https://www.jianshu.com/p/65c4baadf5d9