Deploying a Redis Cluster on Kubernetes

1. References

    https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/

    https://blog.csdn.net/liuyunshengsir/article/details/88877427

    https://www.jianshu.com/p/65c4baadf5d9

2. Base environment (server IPs: 172.17.0.51, 52, 53)

a. OS version        CentOS Linux release 7.6.1810 (Core)

b. Kubernetes version

    kubernetes-server-linux-amd64(v1.13.1)

    kubernetes-node-linux-amd64(v1.13.1)

    kubernetes-client-linux-amd64(v1.13.1)

c. Redis version    5.0.5

3. Installing the Redis cluster on Kubernetes

  Notes:

    This setup uses six Redis nodes (3 masters and 3 replicas), each backed by its own PV and PVC.

    The Docker image used is redis:alpine (fetch it with docker pull redis:alpine).

    Redis source download: https://redis.io/download

(1) Create NFS storage

NFS provides Redis with stable backend storage, so that when a Redis Pod restarts or is rescheduled it still finds its original data. Each Redis instance mounts a remote NFS export through a PV.

yum install -y nfs-utils rpcbind    (install the NFS packages)

Edit /etc/exports and add the following entries:

/jixson/nfs/pv1 *(rw,no_root_squash)

/jixson/nfs/pv2 *(rw,no_root_squash)

/jixson/nfs/pv3 *(rw,no_root_squash)

/jixson/nfs/pv4 *(rw,no_root_squash)

/jixson/nfs/pv5 *(rw,no_root_squash)

/jixson/nfs/pv6 *(rw,no_root_squash)

        systemctl enable nfs && systemctl start nfs    (enable and start NFS)
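
The exported directories must exist on the NFS server, and the export table has to be reloaded after editing /etc/exports. A minimal sketch, using the paths listed above:

    mkdir -p /jixson/nfs/pv{1..6}
    exportfs -rav                (re-export everything in /etc/exports and list the result)
    showmount -e localhost       (verify that all six exports are visible)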

(2) Create the PVs

apiVersion: v1

kind: PersistentVolume

metadata:

  name: redis-pv1

  labels:

    app: redis-pv1

spec:

  capacity:

    storage: 300M

  accessModes:

    - ReadWriteOnce

  persistentVolumeReclaimPolicy: Retain

  nfs:

    server: 172.17.0.51

    path: "/jixson/nfs/pv1"

        For each of the six volumes, change the name, app, and path fields to the matching pv1 through pv6 values.
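
Rather than editing six copies of the manifest by hand, they can be generated from the template above with a small shell loop. A sketch, assuming the template is saved as pv-template.yaml and the manifests go into a pv/ directory (both names are only examples):

    mkdir -p pv
    for i in 1 2 3 4 5 6; do
      # "pv1" appears in the name, the app label and the NFS path, so one substitution covers all three
      sed "s/pv1/pv${i}/g" pv-template.yaml > pv/redis-pv${i}.yaml
    done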

(3) Create the PVCs

kind: PersistentVolumeClaim

apiVersion: v1

metadata:

  name: redis-pvc1

  labels:

    app: redis-pvc1

spec:

  accessModes:

    - ReadWriteOnce

  resources:

    requests:

      storage: 300M

  selector:

    matchLabels:

      app: redis-pv1

        For each of the six claims, change the name and app fields to the matching pvc1 through pvc6 values, and the matchLabels selector to the matching pv1 through pv6 label.
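
The same substitution trick works for the claims. A sketch, assuming the template above is saved as pvc-template.yaml (again only an example name):

    mkdir -p pvc
    for i in 1 2 3 4 5 6; do
      # "pvc1" covers the claim's name and label, "pv1" covers the matchLabels selector
      sed -e "s/pvc1/pvc${i}/g" -e "s/pv1/pv${i}/g" pvc-template.yaml > pvc/redis-pvc${i}.yaml
    done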

(4) Create the ConfigMap

Extract the downloaded Redis source archive to obtain the redis.conf template file.
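
For example, with the 5.0.5 sources (the exact tarball URL may differ; grab it from the download page listed above if it has moved):

    wget http://download.redis.io/releases/redis-5.0.5.tar.gz
    tar xzf redis-5.0.5.tar.gz
    cp redis-5.0.5/redis.conf .    (redis.conf sits at the top of the source tree)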

    kubectl create configmap redis-conf --from-file=redis.conf    (import it as a ConfigMap)

    The ConfigMap content is as follows:

# Please edit the object below. Lines beginning with a '#' will be ignored,

# and an empty file will abort the edit. If an error occurs while saving this file will be

# reopened with the relevant failures.

#

apiVersion: v1

data:

  redis.conf: |

    # Redis configuration file example.

    #

    # Note that in order to read the configuration file, Redis must be

    # started with the file path as first argument:

    #

    # ./redis-server /path/to/redis.conf

    # Note on units: when memory size is needed, it is possible to specify

    # it in the usual form of 1k 5GB 4M and so forth:

    #

    # 1k => 1000 bytes

    # 1kb => 1024 bytes

    # 1m => 1000000 bytes

    # 1mb => 1024*1024 bytes

    # 1g => 1000000000 bytes

    # 1gb => 1024*1024*1024 bytes

    #

    # units are case insensitive so 1GB 1Gb 1gB are all the same.

    bind 0.0.0.0

    protected-mode no

    port 6379

    tcp-backlog 511

    timeout 0

    tcp-keepalive 300

    supervised no

    loglevel notice

    logfile /data/redis.log

    databases 5

    always-show-logo no

    #save 900 1

    #save 300 10

    #save 60 10000

    stop-writes-on-bgsave-error yes

    rdbcompression yes

    rdbchecksum yes

    dbfilename dump.rdb

    dir /data

    replica-serve-stale-data yes

    replica-read-only yes

    repl-diskless-sync no

    repl-diskless-sync-delay 5

    repl-disable-tcp-nodelay no

    replica-priority 100

    lazyfree-lazy-eviction no

    lazyfree-lazy-expire no

    lazyfree-lazy-server-del no

    replica-lazy-flush no

    appendonly no

    appendfilename "appendonly.aof"

    appendfsync everysec

    no-appendfsync-on-rewrite no

    auto-aof-rewrite-percentage 100

    auto-aof-rewrite-min-size 64mb

    aof-load-truncated yes

    aof-use-rdb-preamble yes

    lua-time-limit 5000

    slowlog-log-slower-than 10000

    slowlog-max-len 128

    latency-monitor-threshold 0

    notify-keyspace-events ""

    hash-max-ziplist-entries 512

    hash-max-ziplist-value 64

    list-max-ziplist-size -2

    list-compress-depth 0

    set-max-intset-entries 512

    zset-max-ziplist-entries 128

    zset-max-ziplist-value 64

    hll-sparse-max-bytes 3000

    stream-node-max-bytes 4096

    stream-node-max-entries 100

    activerehashing yes

    client-output-buffer-limit normal 0 0 0

    client-output-buffer-limit replica 256mb 64mb 60

    client-output-buffer-limit pubsub 32mb 8mb 60

    hz 10

    dynamic-hz yes

    aof-rewrite-incremental-fsync yes

    rdb-save-incremental-fsync yes

    cluster-enabled yes

    cluster-node-timeout 5000

    cluster-config-file nodes-6379.conf

    cluster-announce-ip MY_POD_IP

    cluster-announce-port 6379

    cluster-announce-bus-port 16379

    cluster-require-full-coverage no

kind: ConfigMap

metadata:

  creationTimestamp: "2019-07-18T07:28:05Z"

  name: redis-conf

  namespace: default

  resourceVersion: "3060436"

  selfLink: /api/v1/namespaces/default/configmaps/redis-conf

  uid: 9583956d-a92d-11e9-8a85-44a84226c870

        In cluster-announce-ip MY_POD_IP, MY_POD_IP is a placeholder for the IP of the container running Redis. The IP is picked up automatically at container startup (exposed via the Downward API) and substituted into the file before the server starts.
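
Once the Pods from step (5) are running, it is easy to confirm that the placeholder was actually replaced, for example:

    kubectl exec redis01 -- grep cluster-announce-ip /data/redis.conf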

(5) Create the Pod YAML

apiVersion: v1

kind: Pod

metadata:

  name: redis01

  namespace: default

  labels:

    app: cl01-redis

spec:

  containers:

  - name: redis01

    image: redis:alpine

    imagePullPolicy: IfNotPresent

    command: [ "/bin/sh", "-c" ]

    args: [ "cp /usr/local/etc/redis/redis.conf /data/redis.conf;echo 'sed -i \'s/MY_POD_IP/${MY_POD_IP}/g\' /data/redis.conf' > /data/tmp.sh;sh /data/tmp.sh;redis-server /data/redis.conf" ]

    env:

      - name: MY_POD_IP

        valueFrom:

          fieldRef:

            fieldPath: status.podIP

    ports:

    - name: redis

      containerPort: 6379

      protocol: TCP

    - name: cluster

      containerPort: 16379

      protocol: TCP

    resources:

      limits:

        cpu: 200m

        memory: 200Mi

    volumeMounts:

    - name: redis-conf

      mountPath: /usr/local/etc/redis

    - name: redis-data

      mountPath: /data

  volumes:

    - name: redis-conf

      configMap:

        name: redis-conf

        defaultMode: 0755

    - name: redis-data

      persistentVolumeClaim:

        claimName: redis-pvc1

  dnsPolicy: ClusterFirst

        Replace the name fields with the six node names (redis01 through redis06), and point claimName at the matching claim (redis-pvc1 through redis-pvc6) in each Pod manifest.
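
As with the PVs and PVCs, the six Pod manifests can be generated from the template above. A sketch, assuming it is saved as pod-template.yaml and written into a pod/ directory (example names only):

    mkdir -p pod
    for i in 1 2 3 4 5 6; do
      # redis01 -> redis0N covers the Pod and container names, redis-pvc1 -> redis-pvcN covers the claim
      sed -e "s/redis01/redis0${i}/g" -e "s/redis-pvc1/redis-pvc${i}/g" pod-template.yaml > pod/redis0${i}.yaml
    done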

(6) Create a Service to proxy access to the Redis cluster

apiVersion: v1

kind: Service

metadata:

  name: redis-svc

  namespace: default

  labels:

    name: redis-svc

spec:

  type: NodePort

  ports:

  - port: 6379

    targetPort: 6379

  selector:

    app: cl01-redis
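
The Service is applied like any other object; assuming the manifest above is saved as redis-svc.yaml (an example name):

    kubectl apply -f redis-svc.yaml
    kubectl get service redis-svc    (note the NodePort that gets allocated, 47744 in step (9) below)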

(7) Apply the PVs, PVCs, and Pods

kubectl apply -f pv/

kubectl apply -f pvc/

kubectl apply -f pod/
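
Before moving on, it is worth confirming that every claim bound to its volume and that all six Pods started:

    kubectl get pv,pvc    (STATUS should be Bound for all six pairs)
    kubectl get pods      (redis01 through redis06 should reach Running)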

Get the IPs of the six Redis Pods:  kubectl describe pods redis | grep ^IP:

IP:                10.88.84.4

IP:                10.88.69.4

IP:                10.88.84.5

IP:                10.88.69.5

IP:                10.88.84.2

IP:                10.88.69.6
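
Alternatively, the Pod IPs can be listed in one command by selecting on the app label from the Pod template:

    kubectl get pods -l app=cl01-redis -o wide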

(8) Create the Redis cluster

Start an additional CentOS container from which to create the Redis cluster:

docker run -d -ti centos /bin/bash

Log in to the container and install the Redis client:

yum install -y redis

Create the cluster:

        redis-cli --cluster create 10.88.84.2:6379 10.88.84.4:6379 10.88.84.5:6379 10.88.69.6:6379 10.88.69.5:6379 10.88.69.4:6379 --cluster-replicas 1

Type yes at the prompt to complete the cluster creation.
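
The state of the new cluster can be checked from the same container, for example against the first node (IP taken from the list above):

    redis-cli -h 10.88.84.2 -p 6379 cluster info     (expect cluster_state:ok and cluster_known_nodes:6)
    redis-cli -h 10.88.84.2 -p 6379 cluster nodes    (shows the three master/replica pairs)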

(9) Access the Redis cluster

[root@node4051 test-yaml]# kubectl get service |grep redis-svc

redis-svc    NodePort    10.254.220.30   <none>        6379:47744/TCP  18h


          The Service is exposed as a NodePort, so the Redis cluster can be reached at nodeIP:47744.
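
For example, from a machine that can reach the nodes (node IP and port taken from the output above; -c enables cluster-mode redirections):

    redis-cli -c -h 172.17.0.51 -p 47744 set hello world
    redis-cli -c -h 172.17.0.51 -p 47744 get hello

Note that a MOVED redirection still points at the announced Pod IPs, so following redirections only succeeds from a client that can also route to the Pod network.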
