Managing storage is a distinct problem from managing compute. The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed. To do this, two API resources are introduced: PersistentVolume and PersistentVolumeClaim.
A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster, just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod: pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (for example, mounted once read/write or many times read-only).
While PersistentVolumeClaims allow a user to consume abstract storage resources, users often need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs there is the StorageClass resource.
A StorageClass gives administrators a way to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called "profiles" in other storage systems.
A PVC binds to exactly one PV (one-to-one).
PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks on them. The interaction between PVs and PVCs follows this lifecycle:
Provisioning -> Binding -> Using -> Releasing -> Recycling
Note: currently only NFS and HostPath volumes support the Recycle policy; AWS EBS, GCE PD, Azure Disk, and Cinder support the Delete policy.
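The reclaim policy of an existing PV can also be changed afterwards; a minimal sketch, using a placeholder PV name:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
PersistentVolume types are implemented as plugins. Kubernetes currently supports the following plugins: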
1. GCEPersistentDisk
2. AWSElasticBlockStore
3. AzureFile
4. AzureDisk
5. FC (Fibre Channel)
6. Flexvolume
7. Flocker
8. NFS
9. iSCSI
10. RBD (Ceph Block Device)
11. CephFS
12. Cinder (OpenStack block storage)
13. Glusterfs
14. VsphereVolume
15. Quobyte Volumes
16. HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
17. Portworx Volumes
18. ScaleIO Volumes
19. StorageOS
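As a minimal illustration of one of these plugin types (hostPath, which as noted above is for single-node testing only; the PV name and path here are hypothetical):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/pv-demo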
[root@server2 volumes]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-pd 1/1 Running 0 12m
[root@server2 volumes]# kubectl delete -f nfs.yaml ## clean up the previous environment first
[root@server2 volumes]# kubectl get pv
No resources found
[root@server2 volumes]# kubectl get pvc
No resources found in default namespace.
[root@server2 volumes]# kubectl get pod
No resources found in default namespace.
1. Install and configure the NFS server (already done earlier):
yum install -y nfs-utils
mkdir -m 777 /nfsdata
vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
systemctl enable --now rpcbind
systemctl enable --now nfs
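Optionally, verify the export from the NFS server before continuing (a quick sanity check; the IP is the NFS server used throughout this setup):
exportfs -rv ## re-export /etc/exports and list the shares
showmount -e 172.25.200.1 ## should show /nfsdata *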
2. On server1:/nfsdata, create a directory for each PV
[root@server1 nfsdata]# mkdir pv1 pv2 pv3 ## create the corresponding directories
[root@server1 nfsdata]# ll
total 0
drwxr-xr-x 2 root root 6 Feb 25 11:21 pv1
drwxr-xr-x 2 root root 6 Feb 25 11:21 pv2
drwxr-xr-x 2 root root 6 Feb 25 11:21 pv3
[root@server1 pv1]# echo www.westos.org > index.html ## write a test file into each directory
[root@server1 pv2]# echo www.redhat.org > index.html
[root@server1 pv3]# echo www.baidu.com > index.html
3. Install the NFS client on the other nodes
[root@server3 ~]# yum install nfs-utils -y ## nfs-utils is required on every node
[root@server4 ~]# yum install nfs-utils -y
1. Create the PVs
[root@server2 volumes]# cat pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata/pv1
server: 172.25.200.1 # IP of the NFS server
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv2
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata/pv2
server: 172.25.200.1 # IP of the NFS server
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv3
spec:
capacity:
storage: 20Gi
volumeMode: Filesystem
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /nfsdata/pv3
server: 172.25.200.1
[root@server2 volumes]# kubectl apply -f pv1.yaml ## apply
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@server2 volumes]# kubectl get pv ## list the PVs
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 5Gi RWO Recycle Available nfs 6s
pv2 10Gi RWX Recycle Available nfs 6s
pv3 20Gi ROX Recycle Available nfs 6s
2. Create the PVCs and pods
[root@server2 volumes]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim ## PVC object
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce ## read/write by a single node
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs
accessModes:
- ReadWriteMany ## read/write by multiple nodes
resources:
requests:
storage: 10Gi
---
apiVersion: v1 ## create the pods
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: myapp:v1
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: nfs-pv
volumes:
- name: nfs-pv
persistentVolumeClaim:
claimName: pvc1
---
apiVersion: v1
kind: Pod
metadata:
name: test-pd-2
spec:
containers:
- image: myapp:v1
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: nfs-pv-2
volumes:
- name: nfs-pv-2
persistentVolumeClaim: ## reference the PVC
claimName: pvc2
[root@server2 volumes]# kubectl apply -f pvc.yaml
[root@server2 volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc1 Bound pv1 5Gi RWO nfs 9s
pvc2 Bound pv2 10Gi RWX nfs 9s
[root@server2 volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 5Gi RWO Recycle Bound default/pvc1 nfs 2m59s
pv2 10Gi RWX Recycle Bound default/pvc2 nfs 2m59s
pv3 20Gi ROX Recycle Available nfs 2m59s
[root@server2 volumes]# kubectl get pod
[root@server2 volumes]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pd 1/1 Running 0 112s 10.244.141.208 server3 <none> <none>
test-pd-2 1/1 Running 0 112s 10.244.22.8 server4 <none> <none>
[root@server2 volumes]# curl 10.244.141.208 ## curl the pod IPs and check that each one serves the file written on its PV
www.westos.org
[root@server2 volumes]# curl 10.244.22.8
www.redhat.org
## cleanup
[root@server2 volumes]# kubectl delete pod <pod-name>
[root@server2 volumes]# kubectl delete pv <pv-name>
[root@server2 volumes]# kubectl delete pvc <pvc-name>
Manifests: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy
A StorageClass provides a way to describe a "class" of storage; different classes may map to different quality-of-service levels, backup policies, or any other policies.
Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when the StorageClass needs to dynamically provision a PersistentVolume.
StorageClass attributes
See https://kubernetes.io/zh/docs/concepts/storage/storage-classes/ for more attributes.
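A minimal sketch of a StorageClass showing these three fields (the class name example-nfs is hypothetical; the provisioner is the NFS provisioner deployed later in this walkthrough):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete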
The NFS Client Provisioner is an automatic provisioner that uses NFS as its backing storage. It automatically creates PVs for PVCs; it does not provide NFS storage itself, so an external NFS server must already exist.
Provisioned PVs are backed by directories on the NFS server named ${namespace}-${pvcName}-${pvName}.
When a PV is reclaimed, its directory on the NFS server is renamed to archived-${namespace}-${pvcName}-${pvName}.
nfs-client-provisioner source code
nfs-client-provisioner source code (new)
1. Clean up the environment
[root@server2 volumes]# kubectl delete -f pvc.yaml ## clean up the environment
[root@server2 volumes]# kubectl delete -f pv1.yaml
[root@server1 ~]# cd /nfsdata/ ## remove the data on the NFS server
[root@server1 nfsdata]# ls
pv1 pv2 pv3
[root@server1 nfsdata]# rm -fr *
2. Pull the image and push it to the harbor registry
[root@server1 nfsdata]# docker search k8s-staging-sig-storage
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
yuufnn/nfs-external-provisioner gcr.io/k8s-staging-sig-storage/nfs-subdir-ex… 0
heegor/nfs-subdir-external-provisioner Image backup for gcr.io/k8s-staging-sig-stor… 0
zelaxyz/nfs-subdir-external-provisioner #Dockerfile FROM gcr.io/k8s-staging-sig-stor… 0
yuufnn/nfs-subdir-external-provisioner gcr.io/k8s-staging-sig-storage/nfs-subdir-ex… 0
[root@server1 nfsdata]# docker pull heegor/nfs-subdir-external-provisioner:v4.0.0
[root@server1 nfsdata]# docker tag heegor/nfs-subdir-external-provisioner:v4.0.0 reg.westos.org/library/nfs-subdir-external-provisioner:v4.0.0
[root@server1 nfsdata]# docker push reg.westos.org/library/nfs-subdir-external-provisioner:v4.0.0
3. Configure
[root@server2 volumes]# mkdir nfs-client
[root@server2 volumes]# cd nfs-client/
[root@server2 nfs-client]# pwd
/root/volumes/nfs-client
[root@server2 nfs-client]# vim nfs-client-provisioner.yaml
[root@server2 nfs-client]# pwd
/root/volumes/nfs-client
[root@server2 nfs-client]# ls
nfs-client-provisioner.yaml pvc.yaml
[root@server2 nfs-client]# cat nfs-client-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: nfs-subdir-external-provisioner:v4.0.0
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: 172.25.200.1
- name: NFS_PATH
value: /nfsdata
volumes:
- name: nfs-client-root
nfs:
server: 172.25.200.1
path: /nfsdata
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
archiveOnDelete: "true"
[root@server2 nfs-client]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
storageClassName: managed-nfs-storage
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: myapp:v1
volumeMounts:
- name: nfs-pvc
mountPath: "/usr/share/nginx/html"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
[root@server2 nfs-client]# kubectl create namespace nfs-client-provisioner ## create a dedicated namespace for easier management
[root@server2 nfs-client]# kubectl apply -f nfs-client-provisioner.yaml ## deploy the dynamic provisioner
[root@server2 nfs-client]# kubectl get pod -n nfs-client-provisioner ## check the provisioner pod
[root@server2 nfs-client]# kubectl get sc ##StorageClass
[root@server2 nfs-client]# kubectl apply -f pvc.yaml ## apply the test manifest
[root@server2 nfs-client]# kubectl get pv ##
[root@server2 nfs-client]# kubectl get pvc ##
4. Test
[root@server1 nfsdata]# ls ## the backing directory is created following the naming convention
default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf
[root@server1 nfsdata]# cd default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf/
[root@server1 default-test-claim-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf]# echo www.westos.org > index.html
[root@server2 nfs-client]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pod 1/1 Running 0 5m5s 10.244.141.231 server3 <none> <none>
[root@server2 nfs-client]# curl 10.244.141.231
www.westos.org
2. Pull the image and push it to the harbor registry (already pushed earlier, so the layers are reported as existing)
4. Test
[root@server2 nfs-client]# vim demo.yaml
[root@server2 nfs-client]# cat demo.yaml ## test a PVC without a StorageClass
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim-2
spec:
# storageClassName: managed-nfs-storage
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 5Gi
[root@server2 nfs-client]# kubectl apply -f demo.yaml
[root@server2 nfs-client]# kubectl get pvc ## with no StorageClass specified, the claim stays Pending
kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' ## template for setting a default StorageClass
[root@server2 nfs-client]# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
## set managed-nfs-storage as the default StorageClass
[root@server2 nfs-client]# kubectl get sc ## verify the default is set
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 58m
## check the effect
[root@server2 nfs-client]# kubectl delete -f demo.yaml
[root@server2 nfs-client]# kubectl apply -f demo.yaml
[root@server2 nfs-client]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf 2Gi RWX managed-nfs-storage 55m
test-claim-2 Bound pvc-2262d8b4-c660-4301-aad5-2ec59516f14e 5Gi ROX managed-nfs-storage 2s
StatefulSet abstracts application state into two cases: topology state and storage state.
A StatefulSet numbers all of its Pods; the naming rule is $(statefulset name)-$(ordinal), starting from 0.
The StatefulSet also allocates and creates a PVC with the same ordinal for each Pod. Kubernetes can then bind a PV to that PVC through the PersistentVolume mechanism, so every Pod gets its own independent volume.
When a Pod is deleted and recreated, its network identity does not change: the Pod's topology state is pinned down by its "name + ordinal", and each Pod gets a fixed, unique access point, namely its own DNS record.
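For the StatefulSet named web with serviceName nginx-svc used in the example below (default namespace), these fixed identities look like this (illustrative):
Pods: web-0, web-1
PVCs: www-web-0, www-web-1
DNS:  web-0.nginx-svc.default.svc.cluster.local, web-1.nginx-svc.default.svc.cluster.local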
## 1. Clean up the environment
[root@server2 nfs-client]# kubectl delete -f demo.yaml
[root@server2 nfs-client]# kubectl delete -f pvc.yaml
## 2. Configure
[root@server2 volumes]# pwd
/root/volumes
[root@server2 volumes]# mkdir statefulset
[root@server2 volumes]# cd statefulset/
[root@server2 statefulset]# vim service.yaml
[root@server2 statefulset]# cat service.yaml ## manifest for the experiment
apiVersion: v1 ## how a StatefulSet maintains Pod topology state through a headless Service
kind: Service
metadata:
name: nginx-svc
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None ## headless service
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx-svc"
replicas: 2 ## replica count; set it to 0 to remove only the pods, delete the controller to remove everything
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: myapp:v1 ## myapp is essentially nginx
ports:
- containerPort: 80
name: web
volumeMounts: # the PV/PVC design is what makes StatefulSet management of storage state possible
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
storageClassName: managed-nfs-storage ## StorageClass
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
[root@server2 statefulset]# kubectl apply -f service.yaml
service/nginx-svc created
statefulset.apps/web created
[root@server2 statefulset]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 9s
web-1 1/1 Running 0 5s
[root@server2 statefulset]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-25c0739c-3a00-442e-8287-2b2f216cb676 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage 15s
pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage 11s
[root@server2 statefulset]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pvc-25c0739c-3a00-442e-8287-2b2f216cb676 1Gi RWO managed-nfs-storage 17s
www-web-1 Bound pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59 1Gi RWO managed-nfs-storage 13s
[root@server2 statefulset]# kubectl describe svc nginx-svc ## inspect the service details
[root@server2 statefulset]# dig -t A web-0.nginx-svc.default.svc.cluster.local @10.96.0.10 ## the records can also be checked with dig
[root@server2 statefulset]# dig -t A nginx-svc.default.svc.cluster.local @10.96.0.10
## 3. Test
[root@server1 nfsdata]# pwd
/nfsdata
[root@server1 nfsdata]# ls
archived-pvc-2262d8b4-c660-4301-aad5-2ec59516f14e
archived-pvc-bc952d4e-47a5-4ac4-9d95-5cd2e6132ebf
default-www-web-0-pvc-25c0739c-3a00-442e-8287-2b2f216cb676
default-www-web-1-pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59
[root@server1 nfsdata]# echo web-0 > default-www-web-0-pvc-25c0739c-3a00-442e-8287-2b2f216cb676/index.html
[root@server1 nfsdata]# echo web-1 > default-www-web-1-pvc-dde9af65-ae0a-4412-b704-e5b7f1abfe59/index.html
[root@server2 statefulset]# kubectl run demo --image=busyboxplus -it ## test
/ # nslookup nginx-svc
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx-svc
Address 1: 10.244.22.11 web-0.nginx-svc.default.svc.cluster.local
Address 2: 10.244.141.211 web-1.nginx-svc.default.svc.cluster.local
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
Scaling a StatefulSet with kubectl
First, look at the StatefulSet you want to scale and confirm that the application can safely be scaled:
$ kubectl get statefulsets <stateful-set-name>
Change the number of replicas of the StatefulSet:
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
If the StatefulSet was originally created with kubectl apply or kubectl create --save-config, update .spec.replicas in the StatefulSet manifest and then run kubectl apply:
$ kubectl apply -f <stateful-set-file-updated>
You can also edit the field with kubectl edit:
$ kubectl edit statefulsets <stateful-set-name>
Or use kubectl patch:
$ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
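As a concrete example with the web StatefulSet created above (illustrative; the new pod gets its own www-web-2 PVC via the volumeClaimTemplate):
$ kubectl scale statefulsets web --replicas=3
$ kubectl get pod -l app=nginx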
Official tutorial
[root@server2 mysql]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql
labels:
app: mysql
data:
master.cnf: |
# Apply this config only on the master.
[mysqld]
log-bin
slave.cnf: |
# Apply this config only on slaves.
[mysqld]
super-read-only
[root@server2 mysql]# cat services.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
labels:
app: mysql
spec:
ports:
- name: mysql
port: 3306
clusterIP: None
selector:
app: mysql
---
apiVersion: v1
kind: Service
metadata:
name: mysql-read
labels:
app: mysql
spec:
ports:
- name: mysql
port: 3306
selector:
app: mysql
[root@server2 mysql]# cat statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d/
else
cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on master (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ALLOW_EMPTY_PASSWORD
value: "1"
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 512Mi
livenessProbe:
exec:
command: ["mysqladmin", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave. (Need to remove the tailing semicolon!)
cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_slave_info xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm -f xtrabackup_binlog_info xtrabackup_slave_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
mysql -h 127.0.0.1 \
-e "$(<change_master_to.sql.in), \
MASTER_HOST='mysql-0.mysql', \
MASTER_USER='root', \
MASTER_PASSWORD='', \
MASTER_CONNECT_RETRY=10; \
START SLAVE;" || exit 1
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {
}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 5Gi
## pull the images
[root@server1 nfsdata]# docker pull mysql:5.7
[root@server1 nfsdata]# docker tag mysql:5.7 reg.westos.org/library/mysql:5.7
[root@server1 nfsdata]# docker push reg.westos.org/library/mysql:5.7
[root@server1 nfsdata]# docker pull yizhiyong/xtrabackup
[root@server1 nfsdata]# docker tag yizhiyong/xtrabackup:latest reg.westos.org/library/xtrabackup:1.0
[root@server1 nfsdata]# docker push reg.westos.org/library/xtrabackup:1.0
[root@server2 mysql]# kubectl apply -f configmap.yaml
configmap/mysql created
[root@server2 mysql]# kubectl describe cm mysql
Name: mysql
Namespace: default
Labels: app=mysql
Annotations: <none>
Data
====
master.cnf:
----
# Apply this config only on the master.
[mysqld]
log-bin
slave.cnf:
----
# Apply this config only on slaves.
[mysqld]
super-read-only
Events: <none>
[root@server2 mysql]# kubectl apply -f services.yaml
service/mysql created
service/mysql-read created
[root@server2 mysql]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d5h
mysql ClusterIP None <none> 3306/TCP 6s
mysql-read ClusterIP 10.109.234.245 <none> 3306/TCP 6s
[root@server2 mysql]# yum install mariadb -y ## a MySQL client is needed for testing
[root@server2 mysql]# kubectl apply -f statefulset.yaml ##
[root@server2 mysql]# kubectl get pod
[root@server2 mysql]# kubectl get pvc
[root@server2 mysql]# kubectl get pv
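Once all three mysql pods are Running, a quick connectivity check along the lines of the official tutorial (a sketch; the temporary client pod name is arbitrary and the output will vary):
kubectl run mysql-client --image=mysql:5.7 -i --rm --restart=Never -- mysql -h mysql-read -e "SELECT @@server_id"
Repeated runs should round-robin across the members (server-ids 100, 101, 102 per the init script), while writes must go to the primary at mysql-0.mysql.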