Kubernetes volumes serve much the same purpose as Docker volumes: both provide persistent storage and data sharing. The difference is that a Docker volume is mounted on a container and follows the container's lifecycle, while a Kubernetes volume is mounted on a Pod and follows the Pod's lifecycle. Restarting a container inside the Pod therefore does not affect the volume, but rebuilding the Pod may.
Kubernetes supports many volume types; this article only covers some of the simpler ones: emptyDir, hostPath, nfs, pv/pvc, configmap and secret. Distributed storage volumes and cloud storage are not covered.
# Fields for declaring a Pod's volumes
kubectl explain pod.spec.volumes
# Fields for each container's mount points
kubectl explain pod.spec.containers.volumeMounts
Notes on using volumes:
When a Pod is scheduled onto a Node, an emptyDir is created for it: an empty directory automatically allocated on that Node. The emptyDir exists for as long as the Pod keeps running on that Node. Of the volume types covered here, only emptyDir follows the Pod's lifecycle: when the Pod is removed from the Node for any reason, the emptyDir is deleted with it and its data is permanently lost. Note: deleting a container inside the Pod does not affect the emptyDir.
emptyDir use cases:
1) Temporary scratch directories for applications.
2) memory: when mounted as a memory-backed filesystem (tmpfs), the emptyDir is cleared automatically if the node reboots, and anything written to it counts against the container's memory limit; see the sketch below.
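A minimal sketch of such a memory-backed emptyDir (the volume name cache and the 64Mi cap are arbitrary values chosen for this example):
  volumes:
  - name: cache
    emptyDir:
      medium: Memory   # backed by tmpfs, so contents count toward the container's memory usage
      sizeLimit: 64Mi  # optional size cap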
# cat emptyDir-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
  namespace: default
  labels:
    demo: emptydir
spec:
  containers:
  - name: nginx
    image: nginx:1.17.5-alpine
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do date >> /www/index.html; sleep 10; done"
    volumeMounts:
    - name: html
      mountPath: /www
  volumes:
  - name: html
    emptyDir: {}
Create the Pod and test access; the emptyDir is shared between the two containers.
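The create step itself isn't shown above; a minimal sketch, assuming the manifest is saved as emptyDir-demo.yaml and the Pod IP used below comes from the -o wide output:
# kubectl apply -f emptyDir-demo.yaml
# kubectl get pod emptydir-demo -o wide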
# curl 10.244.1.62
Thu Dec 05 04:16:03 UTC 2019
# kubectl exec -it emptydir-demo -c nginx cat /usr/share/nginx/html/index.html
Thu Dec 05 04:16:03 UTC 2019
Thu Dec 05 04:16:13 UTC 2019
Thu Dec 05 04:16:23 UTC 2019
# kubectl exec -it emptydir-demo -c busybox cat /www/index.html
Thu Dec 05 04:16:03 UTC 2019
Thu Dec 05 04:16:13 UTC 2019
Thu Dec 05 04:16:23 UTC 2019
Delete and recreate the Pod: the emptyDir is deleted and recreated along with it.
# kubectl replace -f emptyDir-demo.yaml --force
# curl 10.244.1.63
Thu Dec 05 04:24:56 UTC 2019
# kubectl exec -it emptydir-demo -c busybox tail /www/index.html
Thu Dec 05 04:24:56 UTC 2019
Thu Dec 05 04:25:06 UTC 2019
Similar to emptyDir, hostPath mounts a file or directory that already exists on the local node into the Pod. It does not follow the Pod's lifecycle, so it provides node-local persistence and sharing. However, if the Pod is recreated on a different node, the contents seen through the hostPath mount may change. For that reason hostPath is usually paired with a DaemonSet, for example to collect container logs on each node.
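A rough sketch of that DaemonSet pairing (not part of the original demo; the name log-agent-demo is made up, and busybox stands in for a real log collector such as fluentd or filebeat):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent-demo            # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      app: log-agent-demo
  template:
    metadata:
      labels:
        app: log-agent-demo
    spec:
      containers:
      - name: agent
        image: busybox:latest     # stand-in for a real log collector
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "while true; do ls /var/log/containers; sleep 60; done"]
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log          # each agent Pod sees its own node's logs
          type: Directory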
DirectoryOrCreate: if nothing exists at the given path, an empty directory is created there as needed, with permissions 0755 and the same group and ownership as the kubelet. This means the writing process either needs root privileges, or the permissions of the mounted directory have to be adjusted manually.
kubectl explain pod.spec.volumes.hostPath
# The container in the earlier test used UTC, so hostPath is also used here to mount the node's timezone file
# cat hostPath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostpath-demo
  namespace: default
  labels:
    app: mynginx
    tier: frontend
  annotations:
    bubble/createby: "cluster admin"
spec:
  containers:
  - name: nginx
    image: nginx:1.17.5-alpine
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /var/log/nginx
    - name: localtime
      mountPath: /etc/localtime
  volumes:
  - name: html
    hostPath:
      path: /data/pod/hostPath
      type: DirectoryOrCreate
  - name: localtime
    hostPath:
      path: /etc/localtime
      type: File
Access test: after the Pod is rebuilt, the hostPath contents are unchanged.
# kubectl exec -it pod-hostpath-demo tail /var/log/nginx/access.log
10.244.0.0 - - [05/Dec/2019:16:57:00 +0800] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
# kubectl replace -f hostPath-demo.yaml --force
pod "pod-hostpath-demo" deleted
pod/pod-hostpath-demo replaced
# kubectl exec -it pod-hostpath-demo tail /var/log/nginx/access.log
10.244.0.0 - - [05/Dec/2019:16:57:00 +0800] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0" "-"
NFS is network storage and can be used directly as a volume, enabling data sharing between Pods. When a Pod is deleted, the NFS share is simply unmounted and its contents are preserved.
# cat nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
  namespace: default
  labels:
    app: nfs
spec:
  containers:
  - name: nginx
    image: nginx:1.17.5-alpine
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/pod/nfs
      server: node2.xlbubble.xyz
Access test:
# curl 10.244.1.66
nfs on node2.xlbubble.xyz
# Pre-start check: make sure every node can mount the NFS export
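For reference, a sketch of the server-side export and the per-node check, assuming CentOS-style hosts (package and service names may differ on other distributions):
[root@node2 ~]# yum install -y nfs-utils
[root@node2 ~]# echo '/data/pod/nfs *(rw,no_root_squash)' >> /etc/exports
[root@node2 ~]# systemctl enable nfs-server && systemctl restart nfs-server
[root@node2 ~]# exportfs -arv
# and on every worker node:
# yum install -y nfs-utils
# showmount -e node2.xlbubble.xyz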
A PV (PersistentVolume) is a piece of storage in the cluster backed by physical storage; its lifecycle is independent of any Pod.
A PVC (PersistentVolumeClaim) is a storage request abstracted on top of PVs; Pods request physical storage through a PVC.
PV volume plugins (Kubernetes v1.13+) support local storage, iSCSI, cloud disks, vSphere storage and more; see the official documentation for the full list.
First, create NFS-backed PersistentVolumes that will back the PVC.
# Confirm that the NFS exports on node2.xlbubble.xyz are working
[root@node2 ~]# exportfs -arv
exporting *:/data/pod/v4
exporting *:/data/pod/v3
exporting *:/data/pod/v2
exporting *:/data/pod/v1
exporting *:/data/pod/nfs
Multiple PVs can be created, each offering different characteristics such as capacity, access mode and I/O performance.
# cat pv-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:                             # NFS-backed volume
    path: /data/pod/pv1            # NFS export path
    server: node2.xlbubble.xyz     # NFS server
  accessModes: ["ReadWriteOnce"]   # read-write by a single node only
  capacity:                        # PV size
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/pod/pv2
    server: node2.xlbubble.xyz
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/pod/pv3
    server: node2.xlbubble.xyz
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/pod/pv4
    server: node2.xlbubble.xyz
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 20Gi
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO Retain Available 33s
pv002 5Gi RWO,RWX Retain Available 33s
pv003 10Gi RWO,RWX Retain Available 33s
pv004 20Gi RWO,RWX Retain Available 33s
# cat pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 4Gi   # request 4Gi of storage
# cat pvc-pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
  namespace: default
  labels:
    app: mynginx
    tier: frontend
  annotations:
    bubble/createby: "cluster admin"
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
At this point the PVC is bound to pv002. Binding is automatic: Kubernetes looks for a static PV that satisfies the claim, and if none exists the PVC stays Pending until a suitable PV appears.
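To pin a claim to a specific PV instead of relying on automatic matching, a label selector can be added to the PVC (a sketch reusing the labels defined on the PVs above; mypvc-pinned is a made-up name):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-pinned               # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      name: pv003                  # only PVs carrying this label are considered
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 4Gi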
Static PVs: as above, created manually in advance.
Dynamic PVs: if no existing PV satisfies the claim and the PVC references a StorageClass, a matching PV is provisioned on demand by that class's volume plugin; if provisioning is not possible, the PVC stays Pending. Dynamic provisioning is based on StorageClasses and requires a volume plugin (provisioner) that supports them; see the sketch below.
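A sketch of what dynamic provisioning looks like; nfs-sc and example.com/nfs are placeholders, since plain NFS needs an external provisioner and none is deployed in this article's cluster:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc                     # placeholder name
provisioner: example.com/nfs       # placeholder; must match a deployed provisioner
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc-dynamic              # placeholder name
  namespace: default
spec:
  storageClassName: nfs-sc         # triggers dynamic provisioning through the class above
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 4Gi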
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv002 5Gi RWO,RWX 11s
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO Retain Available 22s
pv002 5Gi RWO,RWX Retain Bound default/mypvc 22s
pv003 10Gi RWO,RWX Retain Available 22s
pv004 20Gi RWO,RWX Retain Available 22s
# curl 10.244.2.72
pv2 on node2.xlbubble.xyz
Note: the I/O mode chain in this test is nfs(rw) --> pv(rw) --> pvc(rw). When configuring, make sure the access mode granted at every layer matches what is required.
Before deleting a PVC, Kubernetes checks whether it is still used by a Pod; if it is, the deletion is not carried out immediately but waits until the Pod releases it. Likewise, before deleting a PV it checks whether a PVC is still bound to it and waits until the PVC is released. This protection is implemented with finalizers such as kubernetes.io/pvc-protection.
# kubectl describe pvc mypvc
Name: mypvc
Namespace: default
StorageClass:
Status: Bound
Volume: pv003
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mypvc","namespace":"default"},"spec":{"accessModes"...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Reclaim policies: Retain, Delete, and Recycle (deprecated).
Static PVs default to the Retain policy: after the PVC or PV is deleted, the data remains on the underlying physical storage and can be cleaned up manually once it is confirmed to be no longer needed.
After the PVC is deleted, the PV goes into the Released state. It still carries the claim reference default/mypvc (PVCs and PVs map one to one) and cannot be bound by a new PVC. To reuse it, either delete and recreate the PV, or edit the PV and remove the stale claimRef.
# kubectl delete -f pvc-demo.yaml
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 2Gi RWO Retain Available 93m
pv002 5Gi RWO,RWX Retain Released default/mypvc 93m
pv003 10Gi RWO,RWX Retain Available 93m
pv004 20Gi RWO,RWX Retain Available 93m
# kubectl edit pv
...
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: mypvc
namespace: default
resourceVersion: "3498958"
uid: 2a110677-d6d6-495e-8b95-5f7cbb49e104
...
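Instead of editing interactively, the stale claimRef can also be cleared with a patch so the Released PV becomes Available again (a sketch using pv002 from the output above):
# kubectl patch pv pv002 -p '{"spec":{"claimRef":null}}'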
Dynamically provisioned PVs default to the Delete policy: when the PVC or PV is deleted, the data on the underlying physical storage is deleted as well.
The official recommendation is to change a PV's reclaim policy with:
# kubectl patch pv YOURPVNAME -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
To Be Continued