PersistentVolume (PV): an abstraction over the underlying storage that exposes it as a resource in the cluster.
PersistentVolumeClaim (PVC): a claim that consumes PV resources.
A Pod requests a PVC and uses it as a volume; the cluster looks up the PV bound to that PVC and mounts it into the Pod.
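Put together, the chain looks like the minimal sketch below. The names pvc-demo and my-claim are hypothetical; it assumes a PVC called my-claim already exists and is bound. The real manifests used in this walkthrough follow.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo              # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data        # hypothetical mount point inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-claim     # hypothetical PVC; the cluster mounts the PV bound to it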
1. The reclaim policy controls what happens to a PV's storage once it is released. Setting it to Recycle means the volume is scrubbed (every file under it is deleted) when the PV is released; the default, Retain, leaves the data untouched for an administrator to reclaim manually. A patch example for changing the policy on a live PV follows the manifest below.
vim nfs-pv.yaml
[root@k8s-master-101 volume]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    release: "nfs"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /opt/nfs/data
    server: 10.0.0.31
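If you later want to switch an existing PV from Recycle back to the default Retain without recreating it, a patch along these lines should work (a sketch, applied to the nfs-pv defined above):
kubectl patch pv nfs-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'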
2. In the PVC spec you can define a label selector to pick the PV to bind to; this PVC selects the PV above via its label release: "nfs". Binding also has to satisfy the other requirements, such as accessModes and the requested storage. A few commands for verifying the resulting binding follow the manifest below.
vim nfs-pvc.yaml
[root@k8s-master-101 volume]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc001
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: "nfs"
3. Create the PV and the PVC
[root@k8s-master-101 volume]# kubectl create -f nfs-pv.yaml
persistentvolume/nfs-pv created
[root@k8s-master-101 volume]# kubectl create -f nfs-pvc.yaml
persistentvolumeclaim/pvc001 created
4. Check the PV and PVC; pvc001 is now bound to nfs-pv.
[root@k8s-master-101 volume]# kubectl get pv,pvc
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/nfs-pv   5Gi        RWX            Recycle          Bound    default/pvc001                           51s

NAME                            STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc001    Bound    nfs-pv   5Gi        RWX                           48s
5. Create containers to test it, specifying the data volume as the pvc001 just created.
vim nfspvc-deployment.yaml
[root@k8s-master-101 volume]# cat nfspvc-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: pvc001
6. Create the Deployment and exec into a container to check whether the mount succeeded; as shown below, it is mounted. Before creating it, confirm that the exported directory /opt/nfs/data on the NFS server contains an index.html file; if not, create one for the test.
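For example, on the NFS server (10.0.0.31, exporting /opt/nfs/data as defined in the PV above), a test page could be created like this:
echo "NFS volume mounted successfully!" > /opt/nfs/data/index.html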
[root@k8s-master-101 volume]# kubectl create -f nfspvc-deployment.yaml
deployment.extensions/nginx-deployment created
[root@k8s-master-101 volume]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-577c68c7f5-2bjxp 1/1 Running 0 19s
nginx-deployment-577c68c7f5-866fr 1/1 Running 0 19s
nginx-deployment-577c68c7f5-wg5bb 1/1 Running 0 19s
[root@k8s-master-101 volume]# kubectl exec -it nginx-deployment-577c68c7f5-2bjxp bash
root@nginx-deployment-577c68c7f5-2bjxp:/# cat /usr/share/nginx/html/index.html
NFS volume mounted successfully!
7. With PVs in place, network filesystems such as NFS or GlusterFS can be provisioned ahead of time as PV volumes and associated with PVCs; workloads then only reference the PVC, which saves having to spell out the network storage server address and path in every pod spec.
1. Create a PV for GlusterFS, similar to the NFS one above: give it a label, set the endpoints field under glusterfs to the glusterfs-cluster Endpoints object created earlier (a sketch of it follows the manifest below), and set path to the gv0 data volume.
vim gluster-pv.yaml
[root@k8s-master-101 volume]# cat gluster-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
  labels:
    release: "gluster"
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "gv0"
    readOnly: false
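For reference, the glusterfs-cluster Endpoints object this PV points at (created earlier in this guide) looks roughly like the sketch below; the node IPs are placeholders for the actual GlusterFS peers:
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
- addresses:
  - ip: 10.0.0.41            # placeholder GlusterFS node IP
  - ip: 10.0.0.42            # placeholder GlusterFS node IP
  ports:
  - port: 1                  # a legal port number is required by the Endpoints API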
2. Write a matching PVC that selects the release: "gluster" label: vim gluster-pvc.yaml
[root@k8s-master-101 volume]# cat gluster-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc002
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      release: "gluster"
3. Create the PV and PVC, then check them
[root@k8s-master-101 volume]# kubectl create -f gluster-pv.yaml
persistentvolume/gluster-pv created
[root@k8s-master-101 volume]# kubectl create -f gluster-pvc.yaml
persistentvolumeclaim/pvc002 created
[root@k8s-master-101 volume]# kubectl get pv,pvc
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/gluster-pv   5Gi        RWX            Retain           Bound    default/pvc002                           12s
persistentvolume/nfs-pv       5Gi        RWX            Recycle          Bound    default/pvc001                           20m

NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pvc001    Bound    nfs-pv       5Gi        RWX                           20m
persistentvolumeclaim/pvc002    Bound    gluster-pv   5Gi        RWX                           9s
4. Create a Deployment to test GlusterFS; the manifest is the same as the NFS one above except that the data volume is changed to pvc002.
vim glusterpvc-deployment.yaml
[root@k8s-master-101 volume]# cat glusterpvc-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: pvc002
5. Delete the Deployment created for NFS earlier, then create a new one to test GlusterFS and exec into a container to check.
[root@k8s-master-101 volume]# kubectl delete -f nfspvc-deployment.yaml
deployment.extensions "nginx-deployment" deleted
[root@k8s-master-101 volume]# kubectl create -f glusterpvc-deployment.yaml
deployment.extensions/nginx-deployment created
[root@k8s-master-101 volume]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-f996c7445-fn6w4 1/1 Running 0 66s
nginx-deployment-f996c7445-mcgjm 1/1 Running 0 66s
nginx-deployment-f996c7445-zt24m 1/1 Running 0 66s
[root@k8s-master-101 volume]# kubectl exec -it nginx-deployment-f996c7445-fn6w4 bash
root@nginx-deployment-f996c7445-fn6w4:/# cat /usr/share/nginx/html/index.html
GlusterFS mounted successfully!
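To double-check that the directory really sits on the GlusterFS volume rather than on the container's own filesystem, the mount can be inspected from inside one of the pods (pod name taken from the kubectl get pod output above):
kubectl exec -it nginx-deployment-f996c7445-fn6w4 -- df -h /usr/share/nginx/html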