Deploying a Kafka Cluster Without ZooKeeper (KRaft Mode) on K8s

1: Image Preparation

Pick any node in your k8s cluster and use Docker to build a Java image with Kafka 2.8 installed inside it. Then generate a cluster ID using the method from my previous post and save it; nothing else is needed at this stage. The saved ID is pushed into the cluster via a ConfigMap.
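
If you haven't generated an ID yet, the kafka-storage.sh tool that ships with Kafka 2.8 can produce one, and kubectl can load it into the cluster directly. A minimal sketch, assuming the install path used throughout this post:

# Inside the container with Kafka 2.8 installed:
/opt/kafka_2.13-2.8.0/bin/kafka-storage.sh random-uuid
# prints something like: cp3xNauAQyq-CPd8bX3Rhg

# From a machine with kubectl access, store the ID in a ConfigMap:
kubectl create configmap cluster-id --from-literal=key=cp3xNauAQyq-CPd8bX3Rhg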

For example, the declarative equivalent:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-id
data:
  key: cp3xNauAQyq-CPd8bX3Rhg
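
Apply it and check that the key is readable (the file name cluster-id.yaml here is just an example):

kubectl apply -f cluster-id.yaml
kubectl get configmap cluster-id -o jsonpath='{.data.key}'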

2: Mount Preparation

Create a directory to mount wherever your storage lives, or use hostPath like I do in my purely local test environment. Under my root directory /opt/kafka/, the properties/ subdirectory holds the per-node configuration directories, and tmp/ holds the per-node data directories; the layout is sketched below.
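
The following commands reconstruct my layout from the hostPath values used in the manifests further down (adjust the root path to taste):

# Per-node config directories under /opt/kafka/properties/
mkdir -p /opt/kafka/properties/kraft-controller{1,2,3}
mkdir -p /opt/kafka/properties/kraft-broker{1,2,3}
# Per-node data/log directories under /opt/kafka/tmp/
mkdir -p /opt/kafka/tmp/kraft-controller{1,2,3}
mkdir -p /opt/kafka/tmp/kraft-broker{1,2,3}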

"kraft" here refers to the config files that ship under /config/kraft/ in the official distribution; if that's unfamiliar, see my previous post or search for it, though there isn't much material out there (otherwise I wouldn't have had to work this out by trial and error myself). Each node's config directory only needs the file that node will actually use.

For example, kraft-controller1 holds a copy of the stock kraft config files (broker.properties, controller.properties, server.properties), but you only need to configure controller.properties, because the Kafka instance in that container will only ever be started with that one file.

The key configuration content looks like this:

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@controller1.p-kfk-con1.default.svc.cluster.local:9093,2@controller2.p-kfk-con1.default.svc.cluster.local:9093,3@controller3.p-kfk-con1.default.svc.cluster.local:9093

Under kraft-broker1, configure broker.properties.

The key configuration:

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker

# The node id associated with this instance's roles
node.id=4

# The connect string for the controller quorum
controller.quorum.voters=1@controller1.p-kfk-con1.default.svc.cluster.local:9093,2@controller2.p-kfk-con1.default.svc.cluster.local:9093,3@controller3.p-kfk-con1.default.svc.cluster.local:9093

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://broker1.p-kfk-con1.default.svc.cluster.local:9092
inter.broker.listener.name=PLAINTEXT

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://broker1.p-kfk-con1.default.svc.cluster.local:9092

If this hostname format is new to you, look up K8s Headless Services: pods with a hostname and subdomain set get a DNS record of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local.
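
Once the headless Service and pods below are up, you can sanity-check these records from inside the cluster with a throwaway pod (busybox:1.28 is just a convenient image that includes nslookup):

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup controller1.p-kfk-con1.default.svc.cluster.local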

3: Headless Service YAML

This is the most important piece.

apiVersion: v1
kind: Service
metadata:
  name: p-kfk-con1
spec:
  selector:
    app: kafka
  ports:
    - port: 9093
      name: controller
      targetPort: 9093
    - port: 9092
      name: broker
      targetPort: 9092
  clusterIP: None
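
Apply it and confirm the Service is headless (CLUSTER-IP shows None) and picks up the kafka pods as they come up (svc-headless.yaml is just an example file name):

kubectl apply -f svc-headless.yaml
kubectl get svc p-kfk-con1
kubectl get endpoints p-kfk-con1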

4: Kafka YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-kfk-con1
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: controller1
      subdomain: p-kfk-con1
      containers:
        - name: kafka-controller
          image: registry.cn-hangzhou.aliyuncs.com/zhoulh/kafka:v1.0
          command:
            - "/bin/bash"
            - "-c"
            - |
              cd /opt/kafka_2.13-2.8.0;
              # Format storage on first start only; the marker dir must match log.dirs in controller.properties
              if [[ ! -d "/tmp/raft-controller-logs" ]]; then
                ./bin/kafka-storage.sh format -t ${uuid} -c ./config/kraft/controller.properties;
              fi
              sleep 30;
              ./bin/kafka-server-start.sh ./config/kraft/controller.properties;
          env:
            - name: uuid
              valueFrom:
                configMapKeyRef:
                  key: key
                  name: cluster-id
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /opt/kafka_2.13-2.8.0/config/kraft
              name: config
            - mountPath: /tmp
              name: log
          resources:
            requests:
              memory: "256Mi"
      restartPolicy: Always
      volumes:
        - name: config
          hostPath:
            path: /opt/kafka/properties/kraft-controller1
        - name: log
          hostPath:
            path: /opt/kafka/tmp/kraft-controller1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-kfk-con2
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: controller2
      subdomain: p-kfk-con1
      containers:
        - name: kafka-controller
          image: registry.cn-hangzhou.aliyuncs.com/zhoulh/kafka:v1.0
          command:
            - "/bin/bash"
            - "-c"
            - |
              cd /opt/kafka_2.13-2.8.0;
              # Format storage on first start only; the marker dir must match log.dirs in controller.properties
              if [[ ! -d "/tmp/raft-controller-logs" ]]; then
                ./bin/kafka-storage.sh format -t ${uuid} -c ./config/kraft/controller.properties;
              fi
              sleep 30;
              ./bin/kafka-server-start.sh ./config/kraft/controller.properties;
          env:
            - name: uuid
              valueFrom:
                configMapKeyRef:
                  key: key
                  name: cluster-id
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /opt/kafka_2.13-2.8.0/config/kraft
              name: config
            - mountPath: /tmp
              name: log
          resources:
            requests:
              memory: "256Mi"
      restartPolicy: Always
      volumes:
        - name: config
          hostPath:
            path: /opt/kafka/properties/kraft-controller2
        - name: log
          hostPath:
            path: /opt/kafka/tmp/kraft-controller2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-kfk-con3
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: controller3
      subdomain: p-kfk-con1
      containers:
        - name: kafka-controller
          image: registry.cn-hangzhou.aliyuncs.com/zhoulh/kafka:v1.0
          command:
            - "/bin/bash"
            - "-c"
            - |
              cd /opt/kafka_2.13-2.8.0;
              # Format storage on first start only; the marker dir must match log.dirs in controller.properties
              if [[ ! -d "/tmp/raft-controller-logs" ]]; then
                ./bin/kafka-storage.sh format -t ${uuid} -c ./config/kraft/controller.properties;
              fi
              sleep 30;
              ./bin/kafka-server-start.sh ./config/kraft/controller.properties;
          env:
            - name: uuid
              valueFrom:
                configMapKeyRef:
                  key: key
                  name: cluster-id
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /opt/kafka_2.13-2.8.0/config/kraft
              name: config
            - mountPath: /tmp
              name: log
          resources:
            requests:
              memory: "256Mi"
      restartPolicy: Always
      volumes:
        - name: config
          hostPath:
            path: /opt/kafka/properties/kraft-controller3
        - name: log
          hostPath:
            path: /opt/kafka/tmp/kraft-controller3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-kfk-bro1
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: broker1
      subdomain: p-kfk-con1
      containers:
        - name: kafka-broker
          image: registry.cn-hangzhou.aliyuncs.com/zhoulh/kafka:v1.0
          command:
            - "/bin/bash"
            - "-c"
            - |
              cd /opt/kafka_2.13-2.8.0;
              # Format storage on first start only; the marker dir must match log.dirs in broker.properties
              if [[ ! -d "/tmp/kraft-broker-logs" ]]; then
                ./bin/kafka-storage.sh format -t ${uuid} -c ./config/kraft/broker.properties;
              fi
              sleep 30;
              ./bin/kafka-server-start.sh ./config/kraft/broker.properties;
          env:
            - name: uuid
              valueFrom:
                configMapKeyRef:
                  key: key
                  name: cluster-id
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /opt/kafka_2.13-2.8.0/config/kraft
              name: config
            - mountPath: /tmp
              name: log
          resources:
            requests:
              memory: "256Mi"
      restartPolicy: Always
      volumes:
        - name: config
          hostPath:
            path: /opt/kafka/properties/kraft-broker1
        - name: log
          hostPath:
            path: /opt/kafka/tmp/kraft-broker1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-kfk-bro2
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: broker2
      subdomain: p-kfk-con1
      containers:
        - name: kafka-broker
          image: registry.cn-hangzhou.aliyuncs.com/zhoulh/kafka:v1.0
          command:
            - "/bin/bash"
            - "-c"
            - |
              cd /opt/kafka_2.13-2.8.0;
              # Format storage on first start only; the marker dir must match log.dirs in broker.properties
              if [[ ! -d "/tmp/kraft-broker-logs" ]]; then
                ./bin/kafka-storage.sh format -t ${uuid} -c ./config/kraft/broker.properties;
              fi
              sleep 30;
              ./bin/kafka-server-start.sh ./config/kraft/broker.properties;
          env:
            - name: uuid
              valueFrom:
                configMapKeyRef:
                  key: key
                  name: cluster-id
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /opt/kafka_2.13-2.8.0/config/kraft
              name: config
            - mountPath: /tmp
              name: log
          resources:
            requests:
              memory: "256Mi"
      restartPolicy: Always
      volumes:
        - name: config
          hostPath:
            path: /opt/kafka/properties/kraft-broker2
        - name: log
          hostPath:
            path: /opt/kafka/tmp/kraft-broker2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep-kfk-bro3
spec:
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      hostname: broker3
      subdomain: p-kfk-con1
      containers:
        - name: kafka-broker
          image: registry.cn-hangzhou.aliyuncs.com/zhoulh/kafka:v1.0
          command:
            - "/bin/bash"
            - "-c"
            - |
              cd /opt/kafka_2.13-2.8.0;
              # Format storage on first start only; the marker dir must match log.dirs in broker.properties
              if [[ ! -d "/tmp/kraft-broker-logs" ]]; then
                ./bin/kafka-storage.sh format -t ${uuid} -c ./config/kraft/broker.properties;
              fi
              sleep 30;
              ./bin/kafka-server-start.sh ./config/kraft/broker.properties;
          env:
            - name: uuid
              valueFrom:
                configMapKeyRef:
                  key: key
                  name: cluster-id
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /opt/kafka_2.13-2.8.0/config/kraft
              name: config
            - mountPath: /tmp
              name: log
          resources:
            requests:
              memory: "256Mi"
      restartPolicy: Always
      volumes:
        - name: config
          hostPath:
            path: /opt/kafka/properties/kraft-broker3
        - name: log
          hostPath:
            path: /opt/kafka/tmp/kraft-broker3
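
With everything applied, a quick smoke test from one of the broker pods confirms the cluster accepts requests. The pod name below is a placeholder for whatever kubectl get pods shows for dep-kfk-bro1, and the topic name is arbitrary:

kubectl exec -it <dep-kfk-bro1-pod> -- /opt/kafka_2.13-2.8.0/bin/kafka-topics.sh --create --topic smoke-test --partitions 3 --replication-factor 3 --bootstrap-server broker1.p-kfk-con1.default.svc.cluster.local:9092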

5: Exposing the Kafka Service to Other Systems

Just one Service:

apiVersion: v1
kind: Service
metadata:
  name: kfk-svc-out
spec:
  selector:
    app: kafka
  ports:
    - port: 9091
      name: broker
      targetPort: 9092
  clusterIP: 10.10.0.2

Other systems can then configure their Kafka connection with bootstrap-server = 10.10.0.2:9091. Note that after the initial bootstrap the client is redirected to the brokers' advertised.listeners addresses, so this still only works for clients that can resolve the in-cluster DNS names above.
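
For instance, an in-cluster check with the console producer that ships with Kafka (run from any pod that has the Kafka binaries; the topic is the one created in the smoke test above):

/opt/kafka_2.13-2.8.0/bin/kafka-console-producer.sh --bootstrap-server 10.10.0.2:9091 --topic smoke-test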

