1. Configure the Helm chart repo
The Kafka Helm chart is still incubating, so the incubator repo has to be added before use:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
If you are located in mainland China, point the repos at the mirrors provided by Azure instead:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator

helm repo list
NAME        URL
stable      http://mirror.azure.cn/kubernetes/charts
local       http://127.0.0.1:8879/charts
incubator   http://mirror.azure.cn/kubernetes/charts-incubator
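After adding the repos, it is usually worth refreshing the local chart index so the latest incubator/kafka version is visible (a standard Helm step, not shown in the output above):

helm repo update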
2. Create Local PVs for Kafka and Zookeeper
2.1 Create the Kafka Local PVs
The deployment environment here is a local test environment, so Local Persistent Volumes are used for storage. First, create a StorageClass for local storage on the k8s cluster, local-storage.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
kubectl apply -f local-storage.yaml
storageclass.storage.k8s.io/local-storage created
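A quick check that the StorageClass exists (plain kubectl, nothing chart-specific):

kubectl get storageclass local-storage

Note that volumeBindingMode: WaitForFirstConsumer delays PV binding until a pod that uses the PVC is actually scheduled, which lets the scheduler honor each Local PV's nodeAffinity.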
Three kafka broker pods will be deployed across the two k8s nodes node1 and node2, so first create the three brokers' Local PVs, kafka-local-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
kubectl apply -f kafka-local-pv.yaml
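At this point the three volumes should be listed in the Available state; because of WaitForFirstConsumer they will only become Bound once the kafka pods are scheduled:

kubectl get pv

(PersistentVolumes are cluster-scoped, so no namespace flag is needed.)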
Matching the Local PVs created above, create the directory /home/kafka/data-0 on node1, and the directories /home/kafka/data-1 and /home/kafka/data-2 on node2.
# node1
mkdir -p /home/kafka/data-0

# node2
mkdir -p /home/kafka/data-1
mkdir -p /home/kafka/data-2
2.2 Create the Zookeeper Local PVs
Likewise, three zookeeper pods will be deployed across node1 and node2, so first create the three zookeeper nodes' Local PVs, zookeeper-local-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
kubectl apply -f zookeeper-local-pv.yaml
Matching the Local PVs created above, create the directory /home/kafka/zkdata-0 on node1, and the directories /home/kafka/zkdata-1 and /home/kafka/zkdata-2 on node2.
# node1
mkdir -p /home/kafka/zkdata-0

# node2
mkdir -p /home/kafka/zkdata-1
mkdir -p /home/kafka/zkdata-2
3. Deploy Kafka
Write the values file for the kafka chart, kafka-values.yaml:
replicas: 3
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 3
  image:
    repository: gcr.azk8s.cn/google_samples/k8szk
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
The installation needs to pull docker images such as gcr.io/google_samples/k8szk:v3, so the values file switches to Azure's GCR Proxy Cache, gcr.azk8s.cn.
helm install --name kafka --namespace kafka -f kafka-values.yaml incubator/kafka
Finally, confirm that all pods are in the Running state:
kubectl get pod -n kafka -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
kafka-0             1/1     Running   0          12m     10.244.0.61   node1   <none>           <none>
kafka-1             1/1     Running   0          6m3s    10.244.1.12   node2   <none>           <none>
kafka-2             1/1     Running   0          2m26s   10.244.1.13   node2   <none>           <none>
kafka-zookeeper-0   1/1     Running   0          12m     10.244.1.9    node2   <none>           <none>
kafka-zookeeper-1   1/1     Running   0          11m     10.244.1.10   node2   <none>           <none>
kafka-zookeeper-2   1/1     Running   0          11m     10.244.1.11   node2   <none>           <none>

kubectl get statefulset -n kafka
NAME              READY   AGE
kafka             3/3     22m
kafka-zookeeper   3/3     22m

kubectl get service -n kafka
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kafka                      ClusterIP   10.102.8.192    <none>        9092/TCP                     31m
kafka-headless             ClusterIP   None            <none>        9092/TCP                     31m
kafka-zookeeper            ClusterIP   10.110.43.203   <none>        2181/TCP                     31m
kafka-zookeeper-headless   ClusterIP   None            <none>        2181/TCP,3888/TCP,2888/TCP   31m
As you can see, the current kafka helm chart deploys kafka and zookeeper as StatefulSets, and through the Local PVs, kafka-0 was scheduled onto node1 while kafka-1 and kafka-2 were scheduled onto node2.
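The pinning works through the PVC-to-PV binding: each broker's claim from the StatefulSet's volumeClaimTemplates binds to one of the node-affine Local PVs created above, which in turn fixes the pod's node. You can inspect the bindings with kubectl (the claim names come from the chart's volumeClaimTemplates):

kubectl get pvc -n kafka

Each datadir-kafka-N claim should show Bound against the matching datadir-kafka-N PV.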
4. Testing after installation
Run the following client Pod, testclient.yaml, inside the k8s cluster to test access to the kafka brokers:
apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: confluentinc/cp-kafka:5.0.1
    command:
    - sh
    - -c
    - "exec tail -f /dev/null"
Create the Pod and open a shell in the testclient container:
kubectl apply -f testclient.yaml
kubectl -n kafka exec testclient -it sh
List the kafka-related commands:
ls /usr/bin/ | grep kafka
kafka-acls
kafka-broker-api-versions
kafka-configs
kafka-console-consumer
kafka-console-producer
kafka-consumer-groups
kafka-consumer-perf-test
kafka-delegation-tokens
kafka-delete-records
kafka-dump-log
kafka-log-dirs
kafka-mirror-maker
kafka-preferred-replica-election
kafka-producer-perf-test
kafka-reassign-partitions
kafka-replica-verification
kafka-run-class
kafka-server-start
kafka-server-stop
kafka-streams-application-reset
kafka-topics
kafka-verifiable-consumer
kafka-verifiable-producer
Create a topic named test1:
kafka-topics --zookeeper kafka-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
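A quick produce/consume round trip verifies that the brokers actually serve traffic. This is a minimal sketch using the console tools shipped in the cp-kafka image, against the kafka service created by the chart:

# shell 1 inside testclient: consume test1 from the beginning
kafka-console-consumer --bootstrap-server kafka:9092 --topic test1 --from-beginning

# shell 2 inside testclient: each line typed becomes one message
kafka-console-producer --broker-list kafka:9092 --topic test1

Lines typed into the producer should show up in the consumer almost immediately.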
List the topics:
kafka-topics --zookeeper kafka-zookeeper:2181 --list
test1
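To see where the partition and its replicas landed, describe the topic with the same stock kafka-topics tool:

kafka-topics --zookeeper kafka-zookeeper:2181 --describe --topic test1

The output shows each partition's leader broker, replica list, and ISR.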
5. Summary
The kafka deployed on k8s here with the incubator/kafka chart from the official Helm repositories uses the image confluentinc/cp-kafka:5.0.1, i.e. the Kafka distribution provided by Confluent. Confluent Platform Kafka (CP Kafka for short) offers some advanced features that Apache Kafka lacks, such as cross-datacenter replication, a Schema Registry, and cluster monitoring tools. CP Kafka currently comes in a free edition and an enterprise edition; on top of the standard Apache Kafka components, the free edition also includes the Schema Registry and REST Proxy.
Confluent Platform and Apache Kafka Compatibility gives the version mapping between Confluent Kafka and Apache Kafka; from it you can see that the cp 5.0.1 installed here corresponds to Apache Kafka 2.0.x.
Enter one of the broker containers and look at the jars:
ls /usr/share/java/kafka | grep kafka
kafka-clients-2.0.1-cp1.jar
kafka-log4j-appender-2.0.1-cp1.jar
kafka-streams-2.0.1-cp1.jar
kafka-streams-examples-2.0.1-cp1.jar
kafka-streams-scala_2.11-2.0.1-cp1.jar
kafka-streams-test-utils-2.0.1-cp1.jar
kafka-tools-2.0.1-cp1.jar
kafka.jar
kafka_2.11-2.0.1-cp1-javadoc.jar
kafka_2.11-2.0.1-cp1-scaladoc.jar
kafka_2.11-2.0.1-cp1-sources.jar
kafka_2.11-2.0.1-cp1-test-sources.jar
kafka_2.11-2.0.1-cp1-test.jar
kafka_2.11-2.0.1-cp1.jar
The corresponding apache kafka version string is 2.11-2.0.1: the leading 2.11 is the version of the Scala compiler (Kafka's server-side code is written in Scala), and the trailing 2.0.1 is the Kafka version. In other words, CP Kafka 5.0.1 is based on Apache Kafka 2.0.1.
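Kafka 2.0's command-line tools also accept a --version flag (added by KIP-278), so assuming that holds for the CP build, the same conclusion can be cross-checked from inside the testclient:

kafka-topics --version

which should print a 2.0.1-cp1-style version string.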
References
Zookeeper Helm Chart
Kafka Helm Chart
GCR Proxy Cache Help
Confluent Platform and Apache Kafka Compatibility