myid files of 3 zookeeper nodes on k8s: file distribution on glusterfs

k8s cluster setup reference:

https://github.com/4admin2root/ansible-kubeadm
zookeeper definition file:
https://github.com/4admin2root/daocloud/blob/master/statefulset/zookeeper.yaml

hosts

10.9.5.64 cloud4ourself-k8sprod6
10.9.5.65 cloud4ourself-k8sprod5
10.9.5.69 cloud4ourself-k8sprod4
10.9.5.75 cloud4ourself-k8sprod3

zookeeper in k8s

[root@cloud4ourself-k8sprod6 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-calc-rc-68f4v 1/1 Running 0 21h 10.40.0.67 cloud4ourself-k8sprod3.novalocal
my-calc-rc-9l17p 1/1 Running 0 21h 10.32.0.58 cloud4ourself-k8sprod1.novalocal
my-frontend-rc-55bp0 1/1 Running 0 21h 10.40.0.68 cloud4ourself-k8sprod3.novalocal
my-frontend-rc-t8vgk 1/1 Running 0 21h 10.39.0.47 cloud4ourself-k8sprod2.novalocal
zk-0 1/1 Running 0 1h 10.39.0.48 cloud4ourself-k8sprod2.novalocal
zk-1 1/1 Running 0 34m 10.42.0.77 cloud4ourself-k8sprod4.novalocal
zk-2 1/1 Running 0 32m 10.44.0.112 cloud4ourself-k8sprod5.novalocal

round 1: myid files on the glusterfs bricks (heketi mounts)

[root@cloud4ourself-k8sprod5 mounts]# pwd
/var/lib/heketi/mounts
===================zone 1
[root@cloud4ourself-k8sprod5 mounts]# find . -name myid |xargs cat
1

[root@cloud4ourself-k8sprod6 mounts]# find . -name myid |xargs cat
2
3
================zone 2
[root@cloud4ourself-k8sprod3 mounts]# find . -name myid |xargs cat
3
[root@cloud4ourself-k8sprod4 mounts]# find . -name myid |xargs cat
1
2
With volumetype "replicate:2" and heketi's zone-aware placement, each pod's volume gets one brick in each zone, so every myid value appears exactly once per zone.

heketi topology file

[root@cloud4ourself-k8sprod6 ~]# cat /etc/heketi/topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["10.9.5.64"],
              "storage": ["10.9.5.64"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["10.9.5.65"],
              "storage": ["10.9.5.65"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["10.9.5.69"],
              "storage": ["10.9.5.69"]
            },
            "zone": 2
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["10.9.5.75"],
              "storage": ["10.9.5.75"]
            },
            "zone": 2
          },
          "devices": ["/dev/vdc"]
        }
      ]
    }
  ]
}
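heketi places replicas of a volume across different `zone` values, which is why the topology deliberately puts two nodes in each zone. A minimal local sanity check of that layout (the heredoc is a condensed copy of the file above; the commented `heketi-cli` line shows how it would actually be loaded, assuming `heketi-cli` is installed and pointed at the server):

```shell
# Condensed copy of /etc/heketi/topology.json, one node per line
cat > /tmp/topology.json <<'EOF'
{"clusters":[{"nodes":[
  {"node":{"hostnames":{"manage":["10.9.5.64"],"storage":["10.9.5.64"]},"zone":1},"devices":["/dev/vdc"]},
  {"node":{"hostnames":{"manage":["10.9.5.65"],"storage":["10.9.5.65"]},"zone":1},"devices":["/dev/vdc"]},
  {"node":{"hostnames":{"manage":["10.9.5.69"],"storage":["10.9.5.69"]},"zone":2},"devices":["/dev/vdc"]},
  {"node":{"hostnames":{"manage":["10.9.5.75"],"storage":["10.9.5.75"]},"zone":2},"devices":["/dev/vdc"]}
]}]}
EOF

# Count storage nodes per zone (each node sits on its own line)
echo "zone 1: $(grep -c '"zone":1' /tmp/topology.json) nodes"   # → zone 1: 2 nodes
echo "zone 2: $(grep -c '"zone":2' /tmp/topology.json) nodes"   # → zone 2: 2 nodes

# Loading the real file into a running heketi server would be:
#   heketi-cli topology load --json=/etc/heketi/topology.json
```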

StorageClass definition file for k8s

[root@cloud4ourself-k8sprod6 heketi]# cat glusterfs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.9.5.243:8080"
  clusterid: "1249abb80755eb4d376baf1630015abf"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"
  gidMin: "40000"
  gidMax: "50000"
  volumetype: "replicate:2"


apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "mypassword" | base64
  key: xxxxxxxxx==
type: kubernetes.io/glusterfs
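The `key` field must hold the base64-encoded heketi admin password (the real value is elided above). A quick sketch of producing it, using "mypassword" as a stand-in:

```shell
# Encode the heketi admin password for the Secret's data.key field.
# "mypassword" is a placeholder; substitute the real admin key.
key=$(echo -n "mypassword" | base64)
echo "$key"   # → bXlwYXNzd29yZA==
```

Note `echo -n`: without it a trailing newline gets encoded and heketi authentication fails with a confusingly valid-looking key.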

scale the zk statefulset

[root@cloud4ourself-k8sprod6 mounts]# kubectl edit cm zk-config
Edit the config to extend the ensemble parameter to zk-0;zk-1;zk-2;zk-3;zk-4
[root@cloud4ourself-k8sprod6 heketi]# kubectl scale --replicas=5 statefulsets/zk
statefulset "zk" scaled
[root@cloud4ourself-k8sprod6 heketi]# kubectl get statefulsets
NAME DESIRED CURRENT AGE
zk 5 3 3h
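CURRENT lags DESIRED because a StatefulSet creates new replicas one at a time, in ordinal order: zk-3 must become Ready before zk-4 is started. The pod names follow the fixed `<statefulset-name>-<ordinal>` convention, which is what lets the ensemble config above enumerate them in advance:

```shell
# StatefulSet pods are named <name>-<ordinal>, created in order 0..N-1;
# for replicas=5 the full membership is known before the pods exist.
for i in $(seq 0 4); do echo "zk-$i"; done
# → zk-0 zk-1 zk-2 zk-3 zk-4 (one per line)
```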
[root@cloud4ourself-k8sprod6 heketi]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-calc-rc-68f4v 1/1 Running 0 22h 10.40.0.67 cloud4ourself-k8sprod3.novalocal
my-calc-rc-9l17p 1/1 Running 0 22h 10.32.0.58 cloud4ourself-k8sprod1.novalocal
my-frontend-rc-55bp0 1/1 Running 0 22h 10.40.0.68 cloud4ourself-k8sprod3.novalocal
my-frontend-rc-t8vgk 1/1 Running 0 22h 10.39.0.47 cloud4ourself-k8sprod2.novalocal
zk-0 1/1 Running 0 3h 10.39.0.48 cloud4ourself-k8sprod2.novalocal
zk-1 1/1 Running 0 1h 10.42.0.77 cloud4ourself-k8sprod4.novalocal
zk-2 1/1 Running 0 1h 10.44.0.112 cloud4ourself-k8sprod5.novalocal
zk-3 0/1 ContainerCreating 0 2m cloud4ourself-k8sprod1.novalocal

[root@cloud4ourself-k8sprod6 mounts]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-calc-rc-68f4v 1/1 Running 0 23h
my-calc-rc-9l17p 1/1 Running 0 23h
my-frontend-rc-55bp0 1/1 Running 0 23h
my-frontend-rc-t8vgk 1/1 Running 0 23h
zk-0 1/1 Running 0 3h
zk-1 1/1 Running 0 1h
zk-2 1/1 Running 0 1h
zk-3 1/1 Running 7 17m
zk-4 1/1 Running 0 3m

round 2: myid files on the glusterfs bricks after scaling to 5

[root@cloud4ourself-k8sprod5 mounts]# pwd
/var/lib/heketi/mounts
===================zone 1
[root@cloud4ourself-k8sprod5 mounts]# find . -name myid |xargs cat
1
4
[root@cloud4ourself-k8sprod6 mounts]# find . -name myid |xargs cat
2
3
5
================zone 2
[root@cloud4ourself-k8sprod3 mounts]# find . -name myid |xargs cat
3
4
[root@cloud4ourself-k8sprod4 mounts]# find . -name myid |xargs cat
1
2
5
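After scaling, the pattern holds: zone 1 holds {1,4} + {2,3,5} and zone 2 holds {3,4} + {1,2,5}, i.e. each myid is stored on exactly two bricks, one per zone, matching `volumetype: "replicate:2"`. A trivial check over the round-2 output above:

```shell
# The ten values below reproduce the `find . -name myid | xargs cat`
# output from round 2 across all four storage nodes.
printf '%s\n' 1 4 2 3 5 3 4 1 2 5 | sort | uniq -c
# → every myid (1..5) has a count of exactly 2
```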
