Deploying a ZooKeeper Cluster in a Kubernetes Cluster

There are two common ways to deploy a ZooKeeper cluster on Kubernetes: one uses the kubernetes-zookeeper image, the other uses the official zookeeper image. The kubernetes-zookeeper approach is covered in the Kubernetes tutorial at https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/ . This post focuses on deployment with the official image. The complete zookeeper.yml is shown below; it defines three single-replica StatefulSets (zoo1, zoo2, zoo3) plus one Service per member, and does not configure data persistence.
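All objects below are created in the zyts namespace, so that namespace has to exist before the manifest is applied. If it does not, create it first:

kubectl create namespace zyts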

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zoo1
  namespace: zyts
spec:
  selector:
    matchLabels:
      app: zoo1
  serviceName: zoo1-service
  replicas: 1
  template:
    metadata:
      labels:
        app: zoo1
    spec:
      restartPolicy: Always
      containers:
      - name: zoo1
        image: zookeeper:3.4.14
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181   # client connections
        - containerPort: 2888   # follower-to-leader traffic
        - containerPort: 3888   # leader election
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        # myid of this member; must match its server.N entry in ZOO_SERVERS
        - name: ZOO_MY_ID
          value: "1"
        # all quorum members, addressed by the pods' stable DNS names
        - name: ZOO_SERVERS
          value: server.1=zoo1-0.zoo1-service.zyts.svc.cluster.local:2888:3888 server.2=zoo2-0.zoo2-service.zyts.svc.cluster.local:2888:3888 server.3=zoo3-0.zoo3-service.zyts.svc.cluster.local:2888:3888

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zoo2
  namespace: zyts
spec:
  selector:
    matchLabels:
      app: zoo2
  serviceName: zoo2-service
  replicas: 1
  template:
    metadata:
      labels:
        app: zoo2
    spec:
      restartPolicy: Always
      containers:
      - name: zoo2
        image: zookeeper:3.4.14
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ZOO_MY_ID
          value: "2"
        - name: ZOO_SERVERS
          value: server.1=zoo1-0.zoo1-service.zyts.svc.cluster.local:2888:3888 server.2=zoo2-0.zoo2-service.zyts.svc.cluster.local:2888:3888 server.3=zoo3-0.zoo3-service.zyts.svc.cluster.local:2888:3888

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zoo3
  namespace: zyts
spec:
  selector:
    matchLabels:
      app: zoo3
  serviceName: zoo3-service
  replicas: 1
  template:
    metadata:
      labels:
        app: zoo3
    spec:
      restartPolicy: Always
      containers:
      - name: zoo3
        image: zookeeper:3.4.14
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
          protocol: TCP
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ZOO_MY_ID
          value: "3"
        - name: ZOO_SERVERS
          value: server.1=zoo1-0.zoo1-service.zyts.svc.cluster.local:2888:3888 server.2=zoo2-0.zoo2-service.zyts.svc.cluster.local:2888:3888 server.3=zoo3-0.zoo3-service.zyts.svc.cluster.local:2888:3888

---
apiVersion: v1
kind: Service
metadata:
  name: zoo1-service
  namespace: zyts
spec:
  ports:
  - protocol: TCP
    port: 2181
    targetPort: 2181
    name: client
  - protocol: TCP
    port: 2888
    targetPort: 2888
    name: leader
  - protocol: TCP
    port: 3888
    targetPort: 3888
    name: leader-election
  selector:
    app: zoo1

---
apiVersion: v1
kind: Service
metadata:
  name: zoo2-service
  namespace: zyts
spec:
  ports:
  - protocol: TCP
    port: 2181
    targetPort: 2181
    name: client
  - protocol: TCP
    port: 2888
    targetPort: 2888
    name: leader
  - protocol: TCP
    port: 3888
    targetPort: 3888
    name: leader-election
  selector:
    app: zoo2

---
apiVersion: v1
kind: Service
metadata:
  name: zoo3-service
  namespace: zyts
spec:
  ports:
  - protocol: TCP
    port: 2181
    targetPort: 2181
    name: client
  - protocol: TCP
    port: 2888
    targetPort: 2888
    name: leader
  - protocol: TCP
    port: 3888
    targetPort: 3888
    name: leader-election
  selector:
    app: zoo3
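A note on name resolution: ZOO_SERVERS addresses each member by its per-pod DNS name (zoo1-0.zoo1-service...), which is published through the Service named in each StatefulSet's serviceName field. The Kubernetes StatefulSet tutorial uses a headless Service for this purpose, so if these per-pod names do not resolve in your cluster, declare the three Services headless by adding clusterIP: None. A minimal sketch for zoo1-service (zoo2-service and zoo3-service would change the same way):

apiVersion: v1
kind: Service
metadata:
  name: zoo1-service
  namespace: zyts
spec:
  clusterIP: None   # headless: DNS resolves directly to the pod, so zoo1-0.zoo1-service... works
  ports:
  - protocol: TCP
    port: 2181
    targetPort: 2181
    name: client
  - protocol: TCP
    port: 2888
    targetPort: 2888
    name: leader
  - protocol: TCP
    port: 3888
    targetPort: 3888
    name: leader-election
  selector:
    app: zoo1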

Apply the manifest:

kubectl apply -f zookeeper.yml

Check the deployment status:
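For example, the following commands list what was created in the zyts namespace; each StatefulSet should show 1/1 ready and each pod should be Running (pod names follow the StatefulSet <name>-<ordinal> convention, i.e. zoo1-0, zoo2-0, zoo3-0):

kubectl get statefulsets -n zyts

kubectl get pods -n zyts -o wide

kubectl get svc -n zyts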

ZooKeeper addresses for services calling from inside the cluster: zoo1-0.zoo1-service.zyts.svc.cluster.local:2181,zoo2-0.zoo2-service.zyts.svc.cluster.local:2181,zoo3-0.zoo3-service.zyts.svc.cluster.local:2181
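As a quick connectivity sketch, a throwaway client pod (the zk-client name here is arbitrary) can be started from the same image:

kubectl run zk-client -n zyts --rm -it --image=zookeeper:3.4.14 -- bash

Inside that pod, the zkCli.sh shipped with the image accepts the three addresses as a single comma-separated connect string:

zkCli.sh -server zoo1-0.zoo1-service.zyts.svc.cluster.local:2181,zoo2-0.zoo2-service.zyts.svc.cluster.local:2181,zoo3-0.zoo3-service.zyts.svc.cluster.local:2181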

Log in to any of the ZooKeeper pods and test whether the cluster is working.
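For example, a sketch using the scripts shipped in the image (exact output varies): zkServer.sh status should report one leader and two followers across the three pods, and a znode written through one member should be readable through another:

kubectl exec zoo1-0 -n zyts -- zkServer.sh status

kubectl exec zoo2-0 -n zyts -- zkServer.sh status

kubectl exec zoo3-0 -n zyts -- zkServer.sh status

kubectl exec zoo1-0 -n zyts -- zkCli.sh create /test hello

kubectl exec zoo3-0 -n zyts -- zkCli.sh get /test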
