The prerequisites for this walkthrough are K8S, Helm, NFS, and StorageClass; for their installation and usage please refer to:
The operating system and software versions used in this walkthrough are as follows:
Before starting, please make sure you have K8S, Helm, NFS, and StorageClass ready;
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-nodeport
  namespace: kafka-test
spec:
  type: NodePort
  ports:
    - port: 2181
      nodePort: 32181
  selector:
    app: zookeeper
    release: kafka
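Assuming the manifest above is saved as zookeeper-nodeport.yaml (a filename chosen here for illustration), it can be applied and verified like this:

```shell
# create the NodePort Service in the kafka-test namespace
kubectl apply -f zookeeper-nodeport.yaml

# confirm the service exists and maps port 2181 to node port 32181
kubectl get service zookeeper-nodeport -n kafka-test
```

Once the Service is up, zookeeper is reachable from outside the cluster on any node's IP at port 32181, which is what the commands below rely on.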
Install the Kafka binary distribution on any machine; the scripts bundled with it can then remotely connect to and operate the Kafka running on K8S:
./kafka-topics.sh --list --zookeeper 192.168.50.135:32181
./kafka-topics.sh --create --zookeeper 192.168.50.135:32181 --replication-factor 1 --partitions 1 --topic test001
As the figure below shows, once the topic is created, listing the topics finally returns something:
5. Describe the topic named test001:
./kafka-topics.sh --describe --zookeeper 192.168.50.135:32181 --topic test001
./kafka-console-producer.sh --broker-list 192.168.50.135:31090 --topic test001
Once in interactive mode, type any string and press Enter; the current line is sent as a message:
7. Open another window and run this command to consume messages:
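For scripted use, the same producer can also read from stdin instead of the interactive prompt; a small sketch using the same broker address as above:

```shell
# send a single message non-interactively by piping it into the producer
echo "hello from a script" | ./kafka-console-producer.sh \
  --broker-list 192.168.50.135:31090 \
  --topic test001
```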
./kafka-console-consumer.sh --bootstrap-server 192.168.50.135:31090 --topic test001 --from-beginning
./kafka-consumer-groups.sh --bootstrap-server 192.168.50.135:31090 --list
As the figure below shows, the group id is console-consumer-21022
9. Run this command to check the consumption status of the group console-consumer-21022:
./kafka-consumer-groups.sh --group console-consumer-21022 --describe --bootstrap-server 192.168.50.135:31090
As shown below:
That completes the remote test of Kafka's basic features: listing topics and sending/receiving messages all worked, which proves the deployment succeeded;
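Beyond the CLI scripts, the broker can also be reached programmatically. Below is a minimal sketch (not part of the original walkthrough), assuming the third-party kafka-python package is installed (pip install kafka-python) and the broker NodePort used above is reachable:

```python
from kafka import KafkaProducer, KafkaConsumer

# same broker address used by the CLI commands above
BROKER = '192.168.50.135:31090'

# send one message to the test001 topic
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send('test001', b'hello from kafka-python')
producer.flush()
producer.close()

# read messages from the beginning of the topic,
# giving up after 5 seconds with no new messages
consumer = KafkaConsumer(
    'test001',
    bootstrap_servers=BROKER,
    auto_offset_reset='earliest',
    consumer_timeout_ms=5000,
)
for record in consumer:
    print(record.value.decode('utf-8'))
consumer.close()
```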
This walkthrough created many resources: rbac, role, serviceaccount, pod, deployment, service; the script below cleans them all up (only the files on NFS are left behind):
helm del --purge kafka
kubectl delete service zookeeper-nodeport -n kafka-test
kubectl delete storageclass managed-nfs-storage
kubectl delete deployment nfs-client-provisioner -n kafka-test
kubectl delete clusterrolebinding run-nfs-client-provisioner
kubectl delete serviceaccount nfs-client-provisioner -n kafka-test
kubectl delete role leader-locking-nfs-client-provisioner -n kafka-test
kubectl delete rolebinding leader-locking-nfs-client-provisioner -n kafka-test
kubectl delete clusterrole nfs-client-provisioner-runner
kubectl delete namespace kafka-test
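To confirm the cleanup worked, both commands below should report that the resource no longer exists (assuming Helm 2, matching the helm del --purge syntax above):

```shell
# both should fail with "not found" once cleanup completes
kubectl get namespace kafka-test
helm status kafka
```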
That completes this walkthrough of deploying and verifying Kafka on K8S; I hope it provides a useful reference;