Data persistence for a ZooKeeper cluster built with Rancher

Big-data clusters can rarely do without ZooKeeper, and in the Docker era deploying ZooKeeper in containers is not hard.
But how does a Dockerized ZooKeeper persist its data?
Let's work through it together; comments and corrections are welcome.
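Before getting to Rancher and Kubernetes, it is worth noting that with plain Docker the usual answer is simply to put the data directory on a volume. Below is a minimal Docker Compose sketch for a single-node ZooKeeper; the service name, volume name and port mapping are illustrative assumptions, not taken from my setup:

version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:4.1.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181       # required by the cp-zookeeper image
    ports:
      - "2181:2181"
    volumes:
      - zk-data:/var/lib/zookeeper      # named volume keeps snapshots and txn logs across container recreation
volumes:
  zk-data: {}

With the named volume in place, the contents of /var/lib/zookeeper survive removing and recreating the container.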
[Screenshot: the cluster I built with Rancher, and the ZooKeeper deployed on it through Rancher]
My original idea was to expose ZooKeeper's data directory out of the container with a simple bind-mount mapping. But because the data directory is not created at the time the container mounts the volume, only the existing parent level could be mounted, which did not meet the need.
So I changed the workload's YAML instead and mounted the data volume with the help of NFS; a rough sketch of what such an NFS-backed volume can look like follows.
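For reference, this is roughly what an NFS-backed PersistentVolume can look like; the server address, export path and capacity below are made-up values for illustration only:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zk-data-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.100          # hypothetical NFS server address
    path: /exports/zookeeper       # hypothetical export path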
Here is my YAML file:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  creationTimestamp: 2019-01-10T07:59:05Z
  generation: 6
  labels:
    app: zookeeper
    chart: zookeeper-0.1.0
    heritage: Tiller
    io.cattle.field/appId: kafka
    release: kafka
  name: kafka-zookeeper
  namespace: hadoop
  resourceVersion: "6999639"
  selfLink: /apis/apps/v1beta2/namespaces/hadoop/statefulsets/kafka-zookeeper
  uid: 99cc4e2b-14ad-11e9-b3fc-141877489f73
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper
      release: kafka
  serviceName: kafka-zookeeper-headless
  template:
    metadata:
      annotations:
        cattle.io/timestamp: 2019-01-11T02:46:38Z
        field.cattle.io/ports: '[[{"containerPort":2181,"dnsName":"kafka-zookeeper-","name":"client","protocol":"TCP","sourcePort":0},{"containerPort":2888,"dnsName":"kafka-zookeeper-","name":"server","protocol":"TCP","sourcePort":0},{"containerPort":3888,"dnsName":"kafka-zookeeper-","name":"leader-election","protocol":"TCP","sourcePort":0}]]'
      creationTimestamp: null
      labels:
        app: zookeeper
        release: kafka
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - zookeeper
                - key: release
                  operator: In
                  values:
                  - kafka
              topologyKey: kubernetes.io/hostname
            weight: 50
      containers:
      - command:
        - bash
        - -c
        - ZOOKEEPER_SERVER_ID=$((${HOSTNAME##*-}+1)) && /etc/confluent/docker/run
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: ZOOKEEPER_TICK_TIME
          valueFrom:
            configMapKeyRef:
              key: tick
              name: kafka-zookeeper
              optional: false
        - name: ZOOKEEPER_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              key: tick
              name: kafka-zookeeper
              optional: false
        - name: ZOOKEEPER_SERVERS
          valueFrom:
            configMapKeyRef:
              key: servers
              name: kafka-zookeeper
              optional: false
        - name: ZOOKEEPER_CLIENT_PORT
          valueFrom:
            configMapKeyRef:
              key: client_port
              name: kafka-zookeeper
              optional: false
        - name: ZOOKEEPER_AUTOPURGE_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              key: purge_interval
              name: kafka-zookeeper
              optional: false
        - name: ZOOKEEPER_SERVER_ID
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        image: confluentinc/cp-zookeeper:4.1.1
        imagePullPolicy: IfNotPresent
        name: kafka-zookeeper
        ports:
        - containerPort: 2181
          name: client
          protocol: TCP
        - containerPort: 2888
          name: server
          protocol: TCP
        - containerPort: 3888
          name: leader-election
          protocol: TCP
        resources: {}
        securityContext:
          capabilities: {}
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/zookeeper
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: data
  updateStrategy:
    type: RollingUpdate
status:
  collisionCount: 0
  currentReplicas: 3
  currentRevision: kafka-zookeeper-89f8d74dc
  observedGeneration: 6
  readyReplicas: 3
  replicas: 3
  updateRevision: kafka-zookeeper-89f8d74dc
  updatedReplicas: 3

With the configuration above, the requirement is met: every time a ZooKeeper instance is added, a PVC is created for it automatically, and the PVC's data lands on the host's disk.
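One caveat: the manifest exported above still declares the data volume as an emptyDir, and an emptyDir neither creates a PVC nor survives the pod being rescheduled. To really get one PVC per replica, the StatefulSet would normally replace that volumes entry with a volumeClaimTemplates block, roughly like the sketch below; the storageClassName and the requested size are assumptions and would have to point at the NFS-backed StorageClass in your own cluster:

  volumeClaimTemplates:
  - metadata:
      name: data                   # must match the container's volumeMounts name
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: nfs        # hypothetical StorageClass provided by an NFS provisioner
      resources:
        requests:
          storage: 5Gi

With that in place, the StatefulSet controller creates one claim per replica (data-kafka-zookeeper-0, data-kafka-zookeeper-1, and so on), and each pod keeps its claim, and therefore its data, when it is rescheduled.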
