k8s PersistentVolume Notes

Background

In our service we need to use kubernetes.Clientset to create containers, and a PersistentVolume has to be bound to them at creation time, so I needed to take a closer look at how PersistentVolumes work.

References

Configure a Pod to Use a PersistentVolume for Storage
Binding Persistent Volumes by Labels
Change the default StorageClass

How I tested

Work through the test cases in the Kubernetes documentation.

done.

Testing against the current project

Question 1: is storageClassName required?

The Binding Persistent Volumes by Labels document shows that a PersistentVolumeClaim can use

selector: 
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1

to match the corresponding PersistentVolume without specifying a storageClassName.
I copy the three YAML files used here:
glusterfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels: 
    storage-tier: gold
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster 
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain

glusterfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
     requests:
       storage: 1Gi
  selector: 
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1

Volume Endpoints
glusterfs-ep.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1

During the actual test, check the versions first:

qiantao@qiant k8s % kubectl version -o yaml
clientVersion:
  buildDate: "2020-02-11T18:14:22Z"
  compiler: gc
  gitCommit: 06ad960bfd03b39c8310aaf92d1e7c12ce618213
  gitTreeState: clean
  gitVersion: v1.17.3
  goVersion: go1.13.6
  major: "1"
  minor: "17"
  platform: darwin/amd64
serverVersion:
  buildDate: "2020-01-15T08:18:29Z"
  compiler: gc
  gitCommit: e7f962ba86f4ce7033828210ca3556393c377bcc
  gitTreeState: clean
  gitVersion: v1.16.6-beta.0
  goVersion: go1.13.5
  major: "1"
  minor: 16+
  platform: linux/amd64

Get the Endpoints:

qiantao@qiant k8s % kubectl get endpoints
NAME                ENDPOINTS                             AGE
glusterfs-cluster   192.168.122.221:1,192.168.122.222:1   4m32s
kubernetes          192.168.65.3:6443                     12d

Get the PVC:

qiantao@qiant k8s % kubectl get pvc gluster-claim -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
  creationTimestamp: "2020-07-13T12:05:42Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: gluster-claim
  namespace: default
  resourceVersion: "201203"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/gluster-claim
  uid: 977b792d-fc2f-440c-9746-b4670250a239
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      aws-availability-zone: us-east-1
      storage-tier: gold
  storageClassName: hostpath
  volumeMode: Filesystem
  volumeName: pvc-977b792d-fc2f-440c-9746-b4670250a239
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  phase: Bound

The bound PersistentVolume turns out to be pvc-977b792d-fc2f-440c-9746-b4670250a239. What happened here???

Fetch both the gluster-volume we created and pvc-977b792d-fc2f-440c-9746-b4670250a239:

qiantao@qiant k8s % kubectl get pv pvc-977b792d-fc2f-440c-9746-b4670250a239 gluster-volume -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      docker.io/hostpath: /var/lib/k8s-pvs/gluster-claim/pvc-977b792d-fc2f-440c-9746-b4670250a239
      pv.kubernetes.io/provisioned-by: docker.io/hostpath
    creationTimestamp: "2020-07-13T12:05:42Z"
    finalizers:
    - kubernetes.io/pv-protection
    name: pvc-977b792d-fc2f-440c-9746-b4670250a239
    resourceVersion: "201200"
    selfLink: /api/v1/persistentvolumes/pvc-977b792d-fc2f-440c-9746-b4670250a239
    uid: 1269e814-3d66-4757-b379-c1d65b6bd178
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 1Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: gluster-claim
      namespace: default
      resourceVersion: "201195"
      uid: 977b792d-fc2f-440c-9746-b4670250a239
    hostPath:
      path: /var/lib/k8s-pvs/gluster-claim/pvc-977b792d-fc2f-440c-9746-b4670250a239
      type: ""
    persistentVolumeReclaimPolicy: Delete
    storageClassName: hostpath
    volumeMode: Filesystem
  status:
    phase: Bound
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    creationTimestamp: "2020-07-13T12:05:28Z"
    finalizers:
    - kubernetes.io/pv-protection
    labels:
      aws-availability-zone: us-east-1
      storage-tier: gold
    name: gluster-volume
    resourceVersion: "201169"
    selfLink: /api/v1/persistentvolumes/gluster-volume
    uid: 12f08ccd-454c-46d0-b2dd-cf5f774b9266
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 2Gi
    glusterfs:
      endpoints: glusterfs-cluster
      path: myVol1
    persistentVolumeReclaimPolicy: Retain
    volumeMode: Filesystem
  status:
    phase: Available
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Judging by the claimRef in the output, the PersistentVolume we created was never used; the cluster provisioned one of its own instead. Why?

qiantao@qiant k8s %  kubectl get storageclass

NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   12d

Listing the storage classes shows that the cluster has a default StorageClass: hostpath.

The key point: if we do not set storageClassName on the claim, Kubernetes uses the default StorageClass to dynamically provision a PersistentVolume for it instead of binding to the one we created.
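
One way to avoid this, based on the behavior above, is to give the claim an explicit empty storageClassName. The empty string means "no class", so the DefaultStorageClass admission controller leaves the claim alone, nothing is dynamically provisioned, and the claim can only bind to a pre-created PV that matches the selector. A minimal sketch (glusterfs-pvc.yaml from above with only the storageClassName line added; not re-run as part of these notes):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  # "" disables the default StorageClass, so no volume is dynamically provisioned
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1

A claim with class "" only binds to PersistentVolumes that themselves have no storageClassName, which is exactly what gluster-volume is.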

For how to change the default StorageClass, see https://k8smeetup.github.io/docs/tasks/administer-cluster/change-default-storage-class/

For a detailed introduction to StorageClass, see https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

The official documentation describes it as follows:

Default behavior

Dynamic volume provisioning can be enabled on a cluster so that all claims are dynamically provisioned when no storage class is specified. A cluster administrator can enable this behavior by:

  • marking one StorageClass object as the default;
  • making sure the DefaultStorageClass admission controller is enabled on the API server.

An administrator can mark a particular StorageClass as the default by adding the storageclass.kubernetes.io/is-default-class annotation to it. When a default StorageClass exists in the cluster and a user creates a PersistentVolumeClaim that does not specify storageClassName, the DefaultStorageClass admission controller automatically adds a storageClassName field to it that points to the default storage class.

Note that there can be at most one default storage class on a cluster; otherwise PersistentVolumeClaims that do not explicitly specify a storageClassName cannot be created.
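
In practice that annotation is the only thing that needs to change. A sketch of moving the default away from the hostpath class seen earlier, following the "Change the default StorageClass" task linked above (<other-class> is a placeholder, not a class that exists on this cluster):

# unmark hostpath as the default StorageClass
kubectl patch storageclass hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
# mark some other class as the default instead
kubectl patch storageclass <other-class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'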

Question 2: re-binding a Released PV (Retain reclaim policy) to a PVC to recover its data

Reference: https://blog.51cto.com/ygqygq2/2308576
Problem:
after the PVC is deleted, the PV remains occupied (phase Released) and cannot be bound again.
An example:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: chenjunhao1-10-26-133-27-pv-nfs-data
  selfLink: /api/v1/persistentvolumes/chenjunhao1-10-26-133-27-pv-nfs-data
  uid: 75e64e9d-94f6-4bc0-baea-9b89b5fcc7fb
  resourceVersion: '208095'
  creationTimestamp: '2020-07-10T08:48:10Z'
  labels:
    pv: chenjunhao1-10-26-133-27-pv-nfs-data
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 200Gi
  nfs:
    server: 10.26.133.27
    path: /home/chenjunhao1
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: chenjunhao1-10-26-133-27-pvc-nfs-data
    uid: b1be9f21-3fec-49c5-a803-94a30783be86
    apiVersion: v1
    resourceVersion: '123603'
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-data
  volumeMode: Filesystem
status:
  phase: Released

But the corresponding PVC has already been deleted, so at this point we can safely remove the spec.claimRef field and run kubectl apply -f again, which gives us:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: chenjunhao1-10-26-133-27-pv-nfs-data
  selfLink: /api/v1/persistentvolumes/chenjunhao1-10-26-133-27-pv-nfs-data
  uid: 75e64e9d-94f6-4bc0-baea-9b89b5fcc7fb
  resourceVersion: '208224'
  creationTimestamp: '2020-07-10T08:48:10Z'
  labels:
    pv: chenjunhao1-10-26-133-27-pv-nfs-data
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 200Gi
  nfs:
    server: 10.26.133.27
    path: /home/chenjunhao1
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-data
  volumeMode: Filesystem
status:
  phase: Available
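
The same result can be had without editing and re-applying the manifest: a merge patch that sets claimRef to null removes the field in place. A sketch, using the PV name from the example above:

kubectl patch pv chenjunhao1-10-26-133-27-pv-nfs-data --type merge -p '{"spec":{"claimRef":null}}'

Either way the PV should drop back to the Available phase and can be claimed again.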

Troubleshooting

https://kubernetes.io/zh/docs/tasks/debug-application-cluster/debug-application/

Command cheat sheet

https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/
