Installing and deploying Kafka and ZooKeeper clusters on a k8s cluster

Note: these are stateful workloads that need persistent storage.
The Kafka and ZooKeeper clusters can be deployed in two ways:
StatefulSet
Service & Deployment

(Adjust each PV's capacity to your own machines, and likewise adjust the CPU, memory, and storage of the cluster nodes to your actual environment.)

I. Create the NFS storage

Pick one k8s node to install NFS on (I installed it on the master node).

  1. Create the NFS storage
    NFS provides Kafka and ZooKeeper with stable backend storage, so that when a Kafka or ZooKeeper Pod is restarted after a failure or migrated to another node, it still sees its original data.
    Here we first set up NFS, then use PVs to mount remote NFS paths for Kafka and ZooKeeper.
  2. Install NFS
yum -y install nfs-utils (provides the file system)
yum -y install rpcbind	(provides the RPC protocol)
  3. Create the shared directories
mkdir -p /usr/local/k8s/zookeeper/pv{1..3}
mkdir -p /usr/local/k8s/kafka/pv{1..3}
vim /etc/exports
# add the shared paths and grant access
/usr/local/k8s/kafka/pv1 192.168.2.0/24(rw,sync,no_root_squash)
/usr/local/k8s/kafka/pv2 192.168.2.0/24(rw,sync,no_root_squash)
/usr/local/k8s/kafka/pv3 192.168.2.0/24(rw,sync,no_root_squash)
/usr/local/k8s/zookeeper/pv1 192.168.2.0/24(rw,sync,no_root_squash)
/usr/local/k8s/zookeeper/pv2 192.168.2.0/24(rw,sync,no_root_squash)
/usr/local/k8s/zookeeper/pv3 192.168.2.0/24(rw,sync,no_root_squash)
  4. Restart the services
systemctl restart rpcbind
systemctl restart nfs
systemctl enable nfs
  5. Check the exports with exportfs
exportfs  -v
  6. Install the nfs-utils client on the other nodes (I installed it on the other node machines)
yum -y install nfs-utils
  7. Check the shared storage from the other nodes
showmount -e 192.168.2.228
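
Optionally, verify from a worker node that the export is actually mountable before handing it to Kubernetes. A quick sanity check, assuming /mnt is free to use as a temporary mount point:

# temporarily mount one of the exported paths
mount -t nfs 192.168.2.228:/usr/local/k8s/zookeeper/pv1 /mnt
# write and read back a test file, then clean up and unmount
touch /mnt/nfs-test && ls -l /mnt/nfs-test && rm -f /mnt/nfs-test
umount /mnt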

II. Create the ZooKeeper cluster

  1. Create the ZooKeeper PVs
vim pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.228
    path: "/usr/local/k8s/zookeeper/pv1"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.228
    path: "/usr/local/k8s/zookeeper/pv2"
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.228
    path: "/usr/local/k8s/zookeeper/pv3"

Create the PVs

kubectl create -f pv.yaml 
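
Before moving on, you can check that the three PVs were registered and show as Available (the names come from the pv.yaml above):

kubectl get pv k8s-pv-zk1 k8s-pv-zk2 k8s-pv-zk3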
  2. Create the ZooKeeper cluster nodes
vim zookeeper.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: IfNotPresent
        image: "leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10"
        command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        resources:
          requests:
            memory: "500M"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 500M

Create the nodes

kubectl create -f zookeeper.yaml
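
The three Pods start in parallel (podManagementPolicy: Parallel); you can watch them until zk-0, zk-1 and zk-2 are all Running and Ready (Ctrl-C to stop watching):

kubectl get pods -w -l app=zk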

Check the cluster

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
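
As a further sanity check you can write a znode through one member and read it back through another. A minimal sketch, assuming zkCli.sh is on the PATH inside this image (the /hello path is just an example):

# create a test znode via zk-0 and read it back via zk-1
kubectl exec zk-0 -- zkCli.sh create /hello world
kubectl exec zk-1 -- zkCli.sh get /hello
# remove the test znode again
kubectl exec zk-0 -- zkCli.sh delete /hello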

III. Create the Kafka cluster

  1. Create the Kafka PVs
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-kafka01
  labels:
    app: kafka
  annotations:
    volume.beta.kubernetes.io/storage-class: "mykafka"
spec:
  capacity:
    storage: 500M
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.2.228
    path: "/usr/local/k8s/kafka/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-kafka02
  labels:
    app: kafka
  annotations:
    volume.beta.kubernetes.io/storage-class: "mykafka"
spec:
  capacity:
    storage: 500M
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.2.228
    path: "/usr/local/k8s/kafka/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-kafka03
  labels:
    app: kafka
  annotations:
    volume.beta.kubernetes.io/storage-class: "mykafka"
spec:
  capacity:
    storage: 500M
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 192.168.2.228
    path: "/usr/local/k8s/kafka/pv3"
---

Create the PVs

kubectl create -f pv.yaml
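
As with ZooKeeper, you can confirm the Kafka PVs are registered and Available (they carry the app: kafka label defined above):

kubectl get pv -l app=kafka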
  2. Create the namespace
kubectl create namespace tools
  3. Create the Kafka cluster nodes
vim kafka.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-hs
  namespace: tools
  labels:
    app: kafka
spec:
  ports:
  - port: 9092
    name: server
  clusterIP: None
  selector:
    app: kafka
--- 
apiVersion: v1
kind: Service
metadata:
  name: kafka-cs
  namespace: tools
  labels:
    app: kafka
spec:
  selector:
    app: kafka
  type: NodePort
  ports:
  - name: client
    port: 9092
  #  nodePort: 19092
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: tools
spec:
  serviceName: kafka-hs
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - kafka
              topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
             - weight: 1
               podAffinityTerm:
                 labelSelector:
                    matchExpressions:
                      - key: "app"
                        operator: In
                        values:
                        - zk
                 topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: kafka
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
        command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
        resources:
          requests:
            memory: "200M"
            cpu: 500m
        ports:
        - containerPort: 9092
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9092 \
          --override zookeeper.connect=zk-0.zk-hs.default.svc.cluster.local:2181,zk-1.zk-hs.default.svc.cluster.local:2181,zk-2.zk-hs.default.svc.cluster.local:2181 \
          --override log.dir=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=true \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value : "-Xmx300M -Xms200M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
           command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9092"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "mykafka"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200M

Create the Kafka cluster nodes

kubectl create -f kafka.yaml
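
The brokers are created one at a time (the default OrderedReady policy); wait until kafka-0 through kafka-2 are all Running and Ready:

kubectl get pods -n tools -w -l app=kafka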

Check the cluster

# exec into one of the containers
kubectl exec -it -n tools kafka-0 -- /bin/bash
# create a topic
kafka-topics.sh --create --topic mytest --zookeeper 10.1.89.29:2181 --partitions 1 --replication-factor 1
# send a few test messages
kafka-console-producer.sh --topic mytest --broker-list localhost:9092
# consume the messages
kafka-console-consumer.sh --topic mytest --bootstrap-server localhost:9092 --from-beginning

You can also connect to the Kafka and ZooKeeper clusters using the Service IP and port.
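
For example, look up the client Services first and then point the Kafka command-line tools at them from inside the cluster. A small sketch (the in-cluster DNS names below follow the Services defined earlier; NodePort access from outside the cluster may additionally depend on how the brokers' advertised listeners resolve):

# find the client service addresses (ClusterIP / NodePort)
kubectl get svc zk-cs
kubectl get svc -n tools kafka-cs
# from inside a kafka pod, the service DNS names also work, e.g.:
kafka-topics.sh --list --zookeeper zk-cs.default.svc.cluster.local:2181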

You may still run into issues along the way; thanks for your understanding and support!
