Table of Contents
Basic usage of PV & PVC
StorageClass & Provisioner
PVC expansion
The basic concepts of PV and PVC in Kubernetes are not covered again here.
The static PV/PVC workflow is:
An administrator pre-creates a set of PVs of various sizes, which form the cluster's storage pool.
When a user creates a PVC with a requested size, a PV of suitable size is picked from the pool and bound to the PVC.
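The binding step above can be sketched as a best-fit search over the pool. This is a simplified illustration, not the real controller code; the field names and sizes are made up for the example:

```python
# Hypothetical sketch of PV/PVC binding: among Available PVs with a
# compatible access mode, pick the smallest one whose capacity still
# satisfies the claim's request.

def bind_pvc(pvs, requested_gi, access_mode):
    """Return the best-fit Available PV for the claim, or None."""
    candidates = [
        pv for pv in pvs
        if pv["phase"] == "Available"
        and access_mode in pv["accessModes"]
        and pv["capacityGi"] >= requested_gi
    ]
    if not candidates:
        return None  # the claim stays Pending until a suitable PV appears
    best = min(candidates, key=lambda pv: pv["capacityGi"])
    best["phase"] = "Bound"  # both objects move to the Bound phase
    return best

pool = [
    {"name": "pv-5g",  "capacityGi": 5,  "accessModes": ["ReadWriteMany"], "phase": "Available"},
    {"name": "pv-10g", "capacityGi": 10, "accessModes": ["ReadWriteMany"], "phase": "Available"},
]
chosen = bind_pvc(pool, 8, "ReadWriteMany")
print(chosen["name"])  # pv-10g: the smallest PV that still fits the 8Gi request
```

Note the 8Gi request binds a 10Gi PV; with static provisioning the user gets the closest available size, not an exact match.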
An example of consuming GlusterFS through PV/PVC in Kubernetes:
1. Create a GlusterFS volume. Assuming one already exists, we use the volume named afr-volume on GlusterFS.
[root@k8s-node1 json]# gluster volume info afr-volume
Volume Name: afr-volume
Type: Replicate
Volume ID: 94fddfe0-123d-4db0-b55b-a49ac2486b21
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: k8s-node1:/opt/afr_data
Brick2: k8s-node2:/opt/afr_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
2. Create an Endpoints object for GlusterFS in Kubernetes
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  namespace: kube-system
subsets:
- addresses:
  - ip: 172.16.9.201
  - ip: 172.16.9.202
  - ip: 172.16.9.203
  ports:
  - port: 1
    protocol: TCP
3. Create the corresponding GlusterFS Service in Kubernetes (the Service name must match the Endpoints name)
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster
  namespace: kube-system
spec:
  ports:
  - port: 1
    protocol: TCP
    targetPort: 1
  type: ClusterIP
4. Create the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "afr-volume"
    readOnly: false
5. Create the PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
Checking the PV and PVC shows the PV has moved from Available to Bound.
In practice, however, an administrator cannot know in advance what sizes users will request, and cannot pre-create PVs of every possible size.
Ideally, when a user creates a PVC of a given size, a PV of the same size is provisioned automatically and bound to that PVC.
Kubernetes supports Dynamic Provisioning through StorageClass objects. A StorageClass names a provisioner, which decides which storage plugin is used to dynamically create volumes: GlusterFS, CephFS, and so on.
Kubernetes ships with a number of in-tree provisioners, listed at:
https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner
The built-in GlusterFS provisioner is kubernetes.io/glusterfs.
It creates volumes dynamically through the REST API exposed by heketi. An example StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicate-2
parameters:
  clusterid: 888490f4545c9a4cc896a6f7485f0362
  resturl: http://192.168.1.1:8087
  volumetype: replicate:2
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
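Under the hood, the provisioner turns each PVC into a volume-create call against the resturl (heketi). A rough sketch of how the StorageClass parameters map onto a heketi request body; the exact field names follow heketi's VolumeCreateRequest as I understand it, so treat them as an assumption and verify against your heketi version:

```python
import json

# Build a heketi-style volume-create payload from StorageClass parameters.
# (Illustrative only: this mirrors what kubernetes.io/glusterfs sends to
# POST <resturl>/volumes, not actual provisioner code.)

def heketi_volume_request(size_gi, clusterid, volumetype):
    kind, _, replica = volumetype.partition(":")  # e.g. "replicate:2"
    body = {"size": size_gi, "clusters": [clusterid]}
    if kind == "replicate":
        body["durability"] = {
            "type": "replicate",
            "replicate": {"replica": int(replica)},
        }
    return body

req = heketi_volume_request(20, "888490f4545c9a4cc896a6f7485f0362", "replicate:2")
print(json.dumps(req, indent=2))
```

The clusterid restricts where heketi places bricks, and volumetype becomes the durability setting of the created Gluster volume.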
With a StorageClass there is no need to create the endpoint - service - pv - pvc chain step by step; simply create a PVC with the desired size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: replicate-2
You can see that the provisioner automatically created and bound all the related resources:
[root@k8s-node1 ~]# kubectl --namespace=kube-system get pvc testpvc -oyaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: "2019-07-02T06:38:13Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: testpvc
  namespace: kube-system
  resourceVersion: "3192"
  selfLink: /api/v1/namespaces/kube-system/persistentvolumeclaims/testpvc
  uid: f7392a88-9c93-11e9-aa77-88d7f6ae9c94
spec:
  accessModes:
  - ReadWriteMany
  dataSource: null
  resources:
    requests:
      storage: 20Gi
  storageClassName: replicate-2
  volumeMode: Filesystem
  volumeName: pvc-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 20Gi
  phase: Bound
[root@k8s-node1 ~]# kubectl --namespace=kube-system get pv pvc-f7392a88-9c93-11e9-aa77-88d7f6ae9c94 -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    Description: 'Gluster-Internal: Dynamically provisioned PV'
    gluster.kubernetes.io/heketi-volume-id: 65314d1b4c5136fef92b7ef1a3725a57
    gluster.org/type: file
    kubernetes.io/createdby: heketi-dynamic-provisioner
    pv.beta.kubernetes.io/gid: "2001"
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
  creationTimestamp: "2019-07-02T06:38:25Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
  resourceVersion: "3196"
  selfLink: /api/v1/persistentvolumes/pvc-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
  uid: fe963cc4-9c93-11e9-aa77-88d7f6ae9c94
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 20Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: testpvc
    namespace: kube-system
    resourceVersion: "3168"
    uid: f7392a88-9c93-11e9-aa77-88d7f6ae9c94
  glusterfs:
    endpoints: glusterfs-dynamic-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
    endpointsNamespace: kube-system
    path: vol_65314d1b4c5136fef92b7ef1a3725a57
  persistentVolumeReclaimPolicy: Delete
  storageClassName: replicate-2
  volumeMode: Filesystem
status:
  phase: Bound
[root@k8s-node1 ~]# kubectl --namespace=kube-system get endpoints glusterfs-dynamic-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
NAME                                                     ENDPOINTS                       AGE
glusterfs-dynamic-f7392a88-9c93-11e9-aa77-88d7f6ae9c94   172.16.2.100:1,172.16.2.200:1   6d
[root@k8s-node1 ~]# kubectl --namespace=kube-system get services glusterfs-dynamic-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
glusterfs-dynamic-f7392a88-9c93-11e9-aa77-88d7f6ae9c94   ClusterIP   10.10.159.255   <none>        1/TCP     6d
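The auto-created objects follow a naming convention visible in the output above: the PV is named pvc-<PVC UID>, and the per-claim Endpoints/Service pair is named glusterfs-dynamic-<PVC UID>. A small sketch of that derivation (an illustration of the observed pattern, not provisioner code):

```python
# Derive the names the glusterfs provisioner gives its generated objects
# from the PVC's UID, matching the kubectl output above.

def dynamic_names(pvc_uid):
    return {
        "pv":        f"pvc-{pvc_uid}",
        "endpoints": f"glusterfs-dynamic-{pvc_uid}",
        "service":   f"glusterfs-dynamic-{pvc_uid}",  # Service name must match Endpoints
    }

names = dynamic_names("f7392a88-9c93-11e9-aa77-88d7f6ae9c94")
print(names["pv"])  # pvc-f7392a88-9c93-11e9-aa77-88d7f6ae9c94
```

Because the names embed the PVC UID, deleting and recreating a claim always yields a fresh set of objects rather than colliding with leftovers.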
Since Kubernetes 1.11, expanding an existing PVC is supported.
Create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-sc
parameters:
  clusterid: 075e35a0ce70274b3ba7f158e77edb2c
  resturl: http://172.16.9.201:8087
  volumetype: replicate:3
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true   # important
Note the allowVolumeExpansion field added to the StorageClass.
First, let's see what happens when expanding a PVC whose StorageClass lacks this field.
Create a 1Gi PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  name: testpvc
spec:
  accessModes:
  - ReadWriteMany
  dataSource: null
  resources:
    requests:
      storage: 1Gi
  storageClassName: replicate-2
  volumeMode: Filesystem
kubectl create -f pvc.yaml
persistentvolumeclaim/testpvc created
Mount the PVC into a Pod:
[root@k8s-node1 test]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - image: nginx:latest
    imagePullPolicy: IfNotPresent
    name: nginx
    volumeMounts:
    - mountPath: /root/testpvc
      name: testpvc
  volumes:
  - name: testpvc
    persistentVolumeClaim:
      claimName: testpvc
[root@k8s-node1 test]# kubectl create -f pod.yaml
pod/testpod created
Exec into the container and check the size; the mounted directory /root/testpvc is indeed about 1Gi:
[root@k8s-node1 test]# kubectl exec -it testpod bash
[root@testpod /]# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.1T 29G 1.1T 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 32G 0 32G 0% /sys/fs/cgroup
172.16.2.100:vol_088693919fcae3397926ad54f7d77815 1020M 33M 987M 4% /root/testpvc
/dev/mapper/k8s-kubelet 704G 33M 704G 1% /etc/hosts
/dev/mapper/k8s-docker 1.1T 29G 1.1T 3% /etc/hostname
shm 64M 0 64M 0% /dev/shm
tmpfs 32G 12K 32G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 32G 0 32G 0% /proc/acpi
tmpfs 32G 0 32G 0% /proc/scsi
tmpfs 32G 0 32G 0% /sys/firmware
Edit the PVC and change the size from 1Gi to 2Gi:
[root@k8s-node1 test]# kubectl edit pvc testpvc
...
error: persistentvolumeclaims "testpvc" could not be patched: persistentvolumeclaims "testpvc" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
You can run `kubectl replace -f /tmp/kubectl-edit-vqpx5.yaml` to try this update again.
The edit is rejected: only dynamically provisioned PVCs can be resized, and the StorageClass that provisioned the PVC must support resize.
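The admission logic behind that error message can be sketched roughly as follows. This is a simplified illustration of the check, not the API server's actual code; the real validation involves more fields:

```python
# Simplified model of the PVC-resize admission check: a resize is allowed
# only if the claim has a StorageClass and that class enables expansion.

def can_resize(pvc, storage_classes):
    sc = storage_classes.get(pvc.get("storageClassName"))
    if sc is None:
        return False, "only dynamically provisioned pvc can be resized"
    if not sc.get("allowVolumeExpansion", False):
        return False, "the storageclass that provisions the pvc must support resize"
    return True, ""

classes = {"replicate-2": {"allowVolumeExpansion": False}}
pvc = {"storageClassName": "replicate-2"}
print(can_resize(pvc, classes))   # rejected: expansion not enabled

classes["replicate-2"]["allowVolumeExpansion"] = True
print(can_resize(pvc, classes))   # allowed after the field is set
```

This mirrors the experiment below: the same edit that fails now succeeds once the StorageClass gains allowVolumeExpansion: true.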
This environment uses GlusterFS, so edit the StorageClass and add allowVolumeExpansion: true:
[root@k8s-node1 test]# kubectl --namespace=kube-system edit storageclasses.storage.k8s.io replicate-2
storageclass.storage.k8s.io/replicate-2 edited
[root@k8s-node1 test]# kubectl --namespace=kube-system get storageclasses.storage.k8s.io replicate-2 -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2019-07-02T06:08:00Z"
  name: replicate-2
  resourceVersion: "1261960"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/replicate-2
  uid: bee4b8f2-9c8f-11e9-aa77-88d7f6ae9c94
parameters:
  clusterid: 888490f4545c9a4cc896a6f7485f0362
  resturl: http://172.16.2.100:8087
  volumetype: replicate:2
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
The StorageClass now carries allowVolumeExpansion: true. Edit the PVC again and change 1Gi to 2Gi; this time there is no error:
[root@k8s-node1 test]# kubectl edit pvc testpvc
persistentvolumeclaim/testpvc edited
The PVC now reports a capacity of 2Gi:
[root@k8s-node1 test]# kubectl get pvc testpvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testpvc   Bound    pvc-6880b78a-a450-11e9-aa77-88d7f6ae9c94   2Gi        RWX            replicate-2    13m
Inside the container, the /root/testpvc mount has grown to 2Gi as well:
[root@testpod /]# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.1T 29G 1.1T 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 32G 0 32G 0% /sys/fs/cgroup
172.16.2.100:vol_088693919fcae3397926ad54f7d77815 2.0G 66M 2.0G 4% /root/testpvc
/dev/mapper/k8s-kubelet 704G 33M 704G 1% /etc/hosts
/dev/mapper/k8s-docker 1.1T 29G 1.1T 3% /etc/hostname
shm 64M 0 64M 0% /dev/shm
tmpfs 32G 12K 32G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 32G 0 32G 0% /proc/acpi
tmpfs 32G 0 32G 0% /proc/scsi
tmpfs 32G 0 32G 0% /sys/firmware
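The CAPACITY column and the df output above both use binary-suffix quantities. A small helper, an illustration rather than the real Kubernetes quantity parser, converts them to bytes so the before/after sizes can be compared:

```python
# Convert Kubernetes-style binary quantities ("1Gi", "512Mi", ...) to bytes.
# Simplified: handles only the binary suffixes seen in this article.

def parse_quantity(q):
    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(float(q[:-2]) * factor)
    return int(q)  # a bare number is already bytes

before = parse_quantity("1Gi")
after = parse_quantity("2Gi")
print(after // before)  # 2: the claim doubled from 1Gi to 2Gi
```

df rounds to human-readable units (1020M for a 1Gi brick after filesystem overhead), which is why the mount shows slightly less than the claimed capacity.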