Deploying a ZooKeeper cluster on Kubernetes with a StatefulSet

This follows the ZooKeeper tutorial on the Kubernetes website, with the data volumes changed to locally created hostPath PVs: https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/

1. The ZooKeeper image

We use the image k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10, which was built with the ZooKeeper startup scripts included, so it can be used as-is.

2. Creating the PVs

We are deploying three replicas, so create one PV for each (pv-zk.yaml). All three PVs use the same hostPath because the required pod anti-affinity in the StatefulSet below schedules each replica onto a different node:

apiVersion: v1
kind: PersistentVolume
metadata:
    name: pv-zk1
    annotations:
      volume.beta.kubernetes.io/storage-class: "anything"     # storage class name; the PVCs must request the same one
    labels:
      type: local
spec:
    capacity:
      storage: 2Gi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: "/opt/data/zookeeper"             # local host directory to mount
    persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
    name: pv-zk2
    annotations:
      volume.beta.kubernetes.io/storage-class: "anything"
    labels:
      type: local
spec:
    capacity:
      storage: 2Gi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: "/opt/data/zookeeper"              # local host directory to mount
    persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
    name: pv-zk3
    annotations:
      volume.beta.kubernetes.io/storage-class: "anything"
    labels:
      type: local
spec:
    capacity:
      storage: 2Gi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: "/opt/data/zookeeper"
    persistentVolumeReclaimPolicy: Recycle
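
The three PV definitions above differ only in the metadata name, so they can equally be generated from a small shell template (a sketch that reproduces the manifest above; pipe the output into kubectl apply -f - if you prefer):

```shell
# gen_pv N: print the PersistentVolume manifest for pv-zkN, with the same
# fields as pv-zk.yaml above (hostPath, 2Gi, storage class "anything").
gen_pv() {
cat <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-zk$1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/opt/data/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
EOF
}

# Emit all three manifests as one multi-document YAML stream:
for i in 1 2 3; do echo '---'; gen_pv "$i"; done
```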

Create the PVs with kubectl create -f pv-zk.yaml.

Use kubectl get pv to confirm that all three PVs exist.

3. Service and StatefulSet manifests

k8s-zk.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady   # OrderedReady starts the pods one at a time; Parallel may cause connection errors
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"   # must match the storage class annotation on the PVs above
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
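
A note on how the pods get their server ids: the start-zookeeper script in this image derives each server's myid from the StatefulSet pod ordinal (zk-0 becomes id 1, zk-1 becomes id 2, and so on), which is why the stable pod names matter. Roughly:

```shell
# Rough sketch of how the image's start-zookeeper script turns the pod
# hostname into a 1-based ZooKeeper server id (the real script also writes
# the id into the data directory's myid file):
myid_from_hostname() {
  host=$1                # e.g. "zk-1", a StatefulSet pod name
  ord=${host##*-}        # the ordinal is everything after the last "-"
  echo $(( ord + 1 ))    # ZooKeeper server ids are 1-based
}

myid_from_hostname zk-0    # -> 1
myid_from_hostname zk-2    # -> 3
```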

Start the pods with kubectl create -f k8s-zk.yaml.

Check their status with kubectl get pods.
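
Because the StatefulSet is bound to the headless zk-hs Service (serviceName: zk-hs), each pod also gets a stable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local, which is how the servers find each other. A quick way to print them (the default namespace and the standard cluster.local domain are assumptions; adjust for your cluster):

```shell
# Stable per-pod DNS names provided by the headless zk-hs Service.
# Namespace "default" and cluster domain "cluster.local" are assumed.
zk_dns() { echo "zk-$1.zk-hs.default.svc.cluster.local"; }

for i in 0 1 2; do zk_dns "$i"; done
```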

4. Checking the ZooKeeper cluster status

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done

You should see one leader and two followers.
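
One leader plus two followers means the ensemble has a quorum of two, which also explains the maxUnavailable: 1 in the PodDisruptionBudget above: an ensemble of n servers needs a majority of floor(n/2)+1 up, so three servers tolerate exactly one failure. In shell arithmetic:

```shell
# ZooKeeper quorum arithmetic (integer division): an ensemble of n servers
# stays available while a majority of n/2+1 are up, so it tolerates
# n - (n/2 + 1) simultaneous failures.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

quorum 3       # -> 2
tolerated 3    # -> 1
quorum 5       # -> 3
tolerated 5    # -> 2
```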
