On-disk files in a container are ephemeral, which causes problems for non-trivial applications running in containers: first, when a container crashes, the kubelet restarts it in a clean state and any files written inside it are lost; second, containers running together in a Pod often need to share files. Kubernetes uses the volume abstraction to solve both of these problems, and supports many volume types. Only a small subset is covered here; for the full list see: https://kubernetes.io/zh/docs/concepts/storage/volumes/#persistentvolumeclaim
This article focuses on persistent volumes. So that developers can request storage without dealing with the details of the underlying storage infrastructure, Kubernetes introduces the PersistentVolume (PV) and PersistentVolumeClaim (PVC) objects:
There are two ways to create a PV: static provisioning and dynamic provisioning.
The access modes are:
- ReadWriteOnce: the volume can be mounted read-write by a single node
- ReadOnlyMany: the volume can be mounted read-only by many nodes
- ReadWriteMany: the volume can be mounted read-write by many nodes

In the command line interface (CLI), the access modes are abbreviated:
- RWO: ReadWriteOnce
- ROX: ReadOnlyMany
- RWX: ReadWriteMany

A volume can only be mounted with one access mode at a time, even if it supports several. For example, a GCEPersistentDisk volume can be mounted as ReadWriteOnce by a single node or as ReadOnlyMany by many nodes, but not both at the same time.
The volumeBindingMode field controls when a PVC is bound to a PV.
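For example, a StorageClass can defer binding until a Pod that uses the PVC is actually scheduled. This is a minimal sketch; the class name local-storage is illustrative, and kubernetes.io/no-provisioner simply marks a class with no dynamic provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage          # illustrative name
provisioner: kubernetes.io/no-provisioner
# Immediate (the default) binds as soon as the PVC is created;
# WaitForFirstConsumer delays binding until a consuming Pod is scheduled,
# so the PV can be chosen with the Pod's node placement in mind.
volumeBindingMode: WaitForFirstConsumer
```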
Static provisioning requires an administrator to manually create the PV, then create a PVC that binds to it, and finally create a Pod that declares the PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual # static provisioning; the name is arbitrary
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data" # this directory is created on the node where the Pod runs
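Assuming the PV manifest above is saved as pv.yaml (the filename is arbitrary), it can be created and inspected like this:

```shell
kubectl apply -f pv.yaml
# Until a PVC binds it, the PV's STATUS column shows Available
kubectl get pv task-pv-volume
```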
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce # must match the PV's access mode
  resources:
    requests:
      storage: 3Gi # request 3Gi; the best-fit PV is bound, so given a 10Gi and a 20Gi PV that both qualify, the 10Gi one is used
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim # name of the PVC to use
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html" # mount path inside the container
          name: task-pv-storage
Create a file in the directory on the host:
root@worker01:~# cd /mnt/data/
root@worker01:/mnt/data# echo "volume nginx" > index.html
Access the Pod's service; nginx's index.html has been replaced by the file on the volume:
root@master01:~/yaml/volume# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
task-pv-pod 1/1 Running 0 11m 192.168.5.17 worker01 <none> <none>
root@master01:~/yaml/volume# curl 192.168.5.17
volume nginx
Delete resources in the order Pod -> PVC -> PV. If the PVC is deleted first, it is only removed once the Pod is gone; if the PV is deleted first, it is only removed once both the Pod and the PVC are gone.
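The same order expressed as commands, using the names from the example above:

```shell
# Recommended order: Pod -> PVC -> PV. If issued out of order, the object
# stays in Terminating until the resources that depend on it are deleted.
kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume
```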
Dynamic volume provisioning allows storage volumes to be created on demand. Without it, a cluster administrator has to manually contact their cloud or storage provider to create new volumes and then create PersistentVolume objects in the Kubernetes cluster to represent them. Dynamic provisioning removes the need to pre-provision storage: volumes are provisioned automatically when users request them. The example below uses NFS as the storage backend; install the NFS server on the master node:
root@master01:/# apt-get -y install nfs-kernel-server
root@master01:/# systemctl enable nfs-kernel-server.service && systemctl restart nfs-kernel-server.service
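The NFS server also needs a directory to export. The path /storage matches the provisioner configuration used later; the 10.0.1.0/24 subnet is an assumption, adjust it to your cluster network:

```shell
mkdir -p /storage
# Export /storage read-write to the cluster subnet (adjust to your network)
echo "/storage 10.0.1.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
exportfs -ra   # reload the export table
```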
Install the NFS client on the worker nodes:
root@worker01:~# apt-get -y install nfs-common
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Dynamic provisioning is implemented via the StorageClass API object in the storage.k8s.io API group. A cluster administrator can define as many StorageClass objects as needed, each naming a storage plugin (the provisioner); the plugin runs as a Pod in the Kubernetes cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            # identifier that marks this plugin
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # must match the StorageClass's provisioner field
            - name: NFS_SERVER
              value: 10.0.1.31 # NFS server IP address
            - name: NFS_PATH
              value: /storage # exported path on the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.1.31 # NFS server IP address
            path: /storage # exported path on the NFS server
A StorageClass declares the storage plugin used to create PVs automatically; the provisioner field must match the plugin's identifier for dynamic provisioning to work:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # must match PROVISIONER_NAME in the provisioner Deployment; NFS has no built-in provisioner, hence the external plugin identifier
parameters:
  archiveOnDelete: "false"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pv-storage
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
Check the created PVC and PV; note that we only created the PVC — the PV was provisioned automatically by the storage plugin:
root@master01:~/yaml/storageClass# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pv-storage Bound pvc-e52ac960-182a-4065-a6e8-6957f5c93b8a 1Gi RWX managed-nfs-storage 3s
root@master01:~/yaml/storageClass# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-e52ac960-182a-4065-a6e8-6957f5c93b8a 1Gi RWX Delete Bound default/nginx-pv-storage managed-nfs-storage 11s
A Pod then uses the PVC to request the volume:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  volumes:
    - name: nginx-pv-storage
      persistentVolumeClaim:
        claimName: nginx-pv-storage # name of the PVC created above
  containers:
    - name: nginx-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "nginx-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-pv-storage
Besides the approach above, where creating a PVC auto-creates the PV and the Pod then declares the PVC, there is an even simpler method: use volumeClaimTemplates to specify the StorageClass and requested size directly, so the PVCs and PVs are created dynamically:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: nginx-disk-ssd
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: nginx-disk-ssd
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "managed-nfs-storage" # name of the StorageClass
        resources:
          requests:
            storage: 10Gi
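Each replica gets its own PVC, named <claim template>-<StatefulSet name>-<ordinal>, so the StatefulSet above yields nginx-disk-ssd-web-0 and nginx-disk-ssd-web-1; when a Pod is rescheduled, it reattaches to its original PVC:

```shell
# Expect nginx-disk-ssd-web-0 and nginx-disk-ssd-web-1 in the output
kubectl get pvc
```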