kubernetes statefulset testing (ceph rbd)

Introduction

This article works through the following two StatefulSet tutorials from the official Kubernetes documentation:
https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/
https://kubernetes.io/docs/tutorials/stateful-application/run-replicated-stateful-application/

Environment

kubernetes-1.5.1 docker-1.10.3
Install the Ceph client on every node in the cluster and load the rbd kernel module:
yum install ceph-common -y
modprobe rbd
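
To confirm both are actually in place on a node, two standard checks suffice:

# verify the ceph CLI is installed
ceph -v
# verify the rbd kernel module is loaded
lsmod | grep rbd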

Configuring the StorageClass (Ceph RBD)

# kubectl create secret generic ceph-secret-admin --from-literal=key='AQADMr1YXXS01gEIQdg==' --type=kubernetes.io/rbd
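
The key literal above is the Ceph admin key. If you need to look it up, it can be printed on any Ceph admin node (this assumes the default client.admin user):

# print the admin key used in the secret above
ceph auth get-key client.admin
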
# cat ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/rbd
parameters:
  monitors: es1-bm0604.synjones.com:6789,es2-bm0606.synjones.com:6789,esr3-bm0608.synjones.com:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  pool: rbd
  userSecretName: ceph-secret-admin
# kubectl create -f ceph-storageclass.yaml
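
To confirm the class is registered and that dynamic provisioning works end to end, list the classes and, optionally, exercise the provisioner with a small throwaway claim (the test-claim manifest below is only an illustrative example, not part of the tutorial):

kubectl get storageclass
# cat test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
# kubectl create -f test-claim.yaml
# kubectl get pvc test-claim     # should reach Bound within a few seconds
# kubectl delete pvc test-claim  # the dynamically provisioned PV is deleted too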

zookeeper

Because the image referenced in the official tutorial is unreachable, and because of version issues, the zookeeper.yaml file was modified as follows:

apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingRequiredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "app",
                      "operator": "In",
                      "values": ["zk-headless"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
    spec:
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: docker.io/4admin2root/k8szk:v1
        resources:
          requests:
            memory: "2Gi"
            cpu: "1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: sync
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: slow
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

kubectl create -f zookeeper.yaml
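The pods are created sequentially (zk-0, then zk-1, then zk-2); as in the official tutorial, you can watch the rollout with a label-filtered watch:

kubectl get pods -w -l app=zk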

[root@cloud4ourself-k8s1 git]# kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                  REASON    AGE
ceph-pv-zk                                 1Gi        RWO           Recycle         Available                                    3d
pvc-a55b5673-0317-11e7-9826-fa163eec323b   1Gi        RWO           Delete          Bound       default/datadir-zk-0             2d
pvc-a55f4a04-0317-11e7-9826-fa163eec323b   1Gi        RWO           Delete          Bound       default/datadir-zk-1             2d
pvc-a56146fd-0317-11e7-9826-fa163eec323b   1Gi        RWO           Delete          Bound       default/datadir-zk-2             2d
pvc-b6c940b1-03df-11e7-9826-fa163eec323b   10Gi       RWO           Delete          Bound       default/data-mysql-0             2d
pvc-b6d04c96-03df-11e7-9826-fa163eec323b   10Gi       RWO           Delete          Bound       default/data-mysql-1             2d
pvc-b6d65a38-03df-11e7-9826-fa163eec323b   10Gi       RWO           Delete          Bound       default/data-mysql-2             2d

kubectl get pvc
If any PVC is in an unexpected state, check its error events:
kubectl describe pvc datadir-zk-0
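
Once all three pods are Running and their claims are Bound, a quick functional check from the official tutorial is to write a znode through one member and read it back through another (this assumes zkCli.sh is on the PATH in the image, as it is in the tutorial image):

# create a znode via zk-0
kubectl exec zk-0 zkCli.sh create /hello world
# read it back via zk-1; the data line should print "world"
kubectl exec zk-1 zkCli.sh get /hello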

Possible issues

Events:
  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason          Message
  --------- --------    -----   ----                -------------   --------    ------          -------
  1m        14s     6   {persistentvolume-controller }          Warning     ProvisioningFailed  Failed to provision volume with StorageClass "slow": rbd: create volume failed, err: executable file not found in $PATH

This failure occurs because kube-controller-manager runs inside a container that has no Ceph client, so the rbd binary the provisioner shells out to is not in its $PATH.
The workaround is to temporarily log in to that container and install the client:
apt-get update && apt-get install ceph-common
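
If kube-controller-manager is run directly by Docker (as in this docker-1.10.3 environment), a minimal way in is the Docker CLI on the master node; <container-id> below is a placeholder:

# locate the controller-manager container
docker ps | grep controller-manager
# open a shell inside it, then run the apt-get command above
docker exec -it <container-id> /bin/bash

Anything installed this way is lost when the container restarts; the durable fix is to bake ceph-common into the controller-manager image.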

mysql

kubectl create -f https://raw.githubusercontent.com/4admin2root/daocloud/master/statefulset/mysql-configmap.yaml
kubectl create -f https://raw.githubusercontent.com/4admin2root/daocloud/master/statefulset/mysql-services.yaml
kubectl create -f https://raw.githubusercontent.com/4admin2root/daocloud/master/statefulset/mysql-statefulset.yaml
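
To verify the replicated setup, the official tutorial writes through the primary at mysql-0.mysql and reads through the load-balanced mysql-read service; a condensed version of that check (using mysql -e instead of the tutorial's heredoc) looks like this:

# write a row through the primary (mysql-0)
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS test; CREATE TABLE IF NOT EXISTS test.messages (message VARCHAR(250)); INSERT INTO test.messages VALUES ('hello')"
# read it back through mysql-read, which load-balances across all members
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM test.messages"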
