This writeup mainly follows the post at https://www.cnblogs.com/00986014w/p/9561901.html, except that its author does not use the official ZooKeeper image.
I use a 3-node ZooKeeper cluster; if yours is sized differently, adjust the manifests below accordingly.
Setting up the ZooKeeper cluster
zookeeper-svc.yaml, the Services for the cluster
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-1
  name: zookeeper-cluster1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-2
  name: zookeeper-cluster2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-3
  name: zookeeper-cluster3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-3
Create the three Services with sudo kubectl create -f zookeeper-svc.yaml.
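To confirm the three Services exist and to see the ClusterIP assigned to each (the names below assume the manifests above; the IPs on your cluster will differ), you can run:
sudo kubectl get svc zookeeper-cluster1 zookeeper-cluster2 zookeeper-cluster3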
zookeeper-deployment.yaml, the Deployments for the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-1
  name: zookeeper-cluster-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-cluster-service-1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-1
        name: zookeeper-cluster-1
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-1
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: "server.1=0.0.0.0:2888:3888 server.2=zookeeper-cluster2:2888:3888 server.3=zookeeper-cluster3:2888:3888"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-2
  name: zookeeper-cluster-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-cluster-service-2
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-2
        name: zookeeper-cluster-2
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-2
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "2"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper-cluster3:2888:3888"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-3
  name: zookeeper-cluster-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper-cluster-service-3
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-3
        name: zookeeper-cluster-3
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-3
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "3"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster1:2888:3888 server.2=zookeeper-cluster2:2888:3888 server.3=0.0.0.0:2888:3888"
Create the three Deployments with sudo kubectl create -f zookeeper-deployment.yaml.
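The Pods may take a little while to pull the image and start; you can watch them with:
sudo kubectl get pods | grep zookeeper-cluster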
Checking that the cluster started successfully
Once all three Pods are Running, check each Pod's log for errors with sudo kubectl logs zookeeper-cluster-1-xxxxxx.
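Beyond eyeballing the logs, a quick sanity check is to ask each node for its quorum role: exactly one node should report itself as leader and the other two as follower. This assumes zkServer.sh is on the image's PATH, which the official image provides:
sudo kubectl exec -it zookeeper-cluster-1-xxxxxx -- zkServer.sh status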
Then enter two of the Pods with sudo kubectl exec -it zookeeper-cluster-1-676df4686f-c7b6d /bin/bash, run zkCli.sh in each, and try creating a znode in one and reading it from the other to confirm replication works.
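A minimal smoke test (the znode /test and the value hello are arbitrary choices): in the first Pod's zkCli.sh session run
create /test hello
then in the second Pod's session run
get /test
and check that hello comes back.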
Setting up the Kafka cluster
kafka-svc.yaml, the Services for the cluster
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster1
  labels:
    app: kafka-cluster-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-1
    targetPort: 9092
    nodePort: 30091
    protocol: TCP
  selector:
    app: kafka-cluster-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster2
  labels:
    app: kafka-cluster-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-2
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka-cluster-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster3
  labels:
    app: kafka-cluster-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-3
    targetPort: 9092
    nodePort: 30093
    protocol: TCP
  selector:
    app: kafka-cluster-3
Create the three Services with sudo kubectl create -f kafka-svc.yaml.
kafka-deployment.yaml, the Deployments for the cluster
Note that KAFKA_ADVERTISED_HOST_NAME in env must be changed to the ClusterIP of the Service in front of each Pod (kafka-cluster1/2/3 here); this is the address the broker hands out to clients, so an unreachable value breaks producers and consumers.
PS: if your ZooKeeper Services are named differently from mine above, change KAFKA_ZOOKEEPER_CONNECT to match.
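One way to look up each ClusterIP before filling in the placeholders below (repeat for kafka-cluster2 and kafka-cluster3):
sudo kubectl get svc kafka-cluster1 -o jsonpath='{.spec.clusterIP}'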
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cluster-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-1
  template:
    metadata:
      labels:
        name: kafka-cluster-1
        app: kafka-cluster-1
    spec:
      containers:
      - name: kafka-cluster-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster1]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cluster-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-2
  template:
    metadata:
      labels:
        name: kafka-cluster-2
        app: kafka-cluster-2
    spec:
      containers:
      - name: kafka-cluster-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster2]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-cluster-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-3
  template:
    metadata:
      labels:
        name: kafka-cluster-3
        app: kafka-cluster-3
    spec:
      containers:
      - name: kafka-cluster-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster3]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "3"
Create the three Deployments with sudo kubectl create -f kafka-deployment.yaml.
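As with ZooKeeper, watch the Pods until all three reach Running:
sudo kubectl get pods | grep kafka-cluster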
Checking that the cluster started successfully
Once all three Pods are Running, check each Pod's log for errors with sudo kubectl logs kafka-cluster-1-xxxxxx.
Then enter one Pod with sudo kubectl exec -it kafka-cluster-1-558747bc7d-5n94p /bin/bash and run kafka-console-producer.sh --broker-list [ClusterIP of kafka-cluster1]:9092 --topic test, which starts a console producer on the topic test (the broker auto-creates the topic on first use by default).
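To double-check that the topic exists, you can list topics from inside the Pod. The wurstmeister/kafka image of this era ships ZooKeeper-based tooling, so the sketch below uses --zookeeper; Kafka 3.x replaces it with --bootstrap-server pointed at a broker:
kafka-topics.sh --list --zookeeper zookeeper-cluster1:2181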
Enter another Pod with sudo kubectl exec -it kafka-cluster-2-66c88f759b-8wlvp /bin/bash and run kafka-console-consumer.sh --bootstrap-server [ClusterIP of kafka-cluster2]:9092 --topic test --from-beginning to consume the topic test.
Then type a few messages into the producer on cluster-1 and check that they show up in the consumer on cluster-2.