[Kubernetes] Volumes

Volumes | Kubernetes

PersistentVolume | Kuboard

I. Volumes

Disk files in a container are ephemeral, which presents problems for non-trivial applications running in containers. One problem is the loss of files when a container crashes: the kubelet restarts the container, but with a clean state. A second problem occurs when sharing files between containers running together in a Pod. The Kubernetes volume abstraction solves both of these problems. Familiarity with Pods is suggested.

1. Background

Docker has a concept of volumes, though it is somewhat looser and less managed. A Docker volume is a directory on disk or in another container. Docker provides volume drivers, but the functionality is limited.

Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have the lifetime of a Pod, while persistent volumes exist beyond the lifetime of a Pod. When a Pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given Pod, data is preserved across container restarts.

At its core, a volume is a directory, possibly with some data in it, which is accessible to the containers in a Pod. How that directory comes to be, the medium that backs it, and its contents are determined by the particular volume type used.

To use a volume, specify the volumes to provide for the Pod in .spec.volumes and declare where to mount those volumes into containers in .spec.containers[*].volumeMounts. A process in a container sees a filesystem view composed from the initial contents of the container image plus the volumes (if defined) mounted inside the container. The process sees a root filesystem that initially matches the contents of the container image. If allowed, any writes within that filesystem hierarchy affect what the process views on subsequent filesystem accesses. Volumes are mounted at the specified paths within the image. For each container defined in the Pod, you must independently specify where to mount each volume that container uses.
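
A minimal sketch of the two fields just mentioned (the names volume-demo and scratch are hypothetical; an emptyDir volume is used only because it needs no external storage):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.28.4
    command: [ "sh", "-c", "sleep 100000" ]
    volumeMounts:
    - name: scratch          # must match a volume declared in .spec.volumes
      mountPath: /scratch    # where this container sees the volume
  volumes:
  - name: scratch            # declared once per Pod, mounted per container
    emptyDir: {}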

Volumes cannot be mounted within other volumes (but see Using subPath for a related mechanism). Also, a volume cannot contain a hard link to anything in a different volume.

2. Types of Volumes

Kubernetes supports several types of volumes.

Here I introduce some of the simpler ones; the AWS series is not covered, since Amazon storage is not available to me.

2.1 cephfs

Note: You must have your own Ceph server running with the share exported before you can use it.

See the CephFS example for more details.
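
A hedged sketch of what a cephfs volume looks like in a Pod spec (the monitor addresses, user, and Secret name below are placeholders, not values from this post):

  volumes:
  - name: cephfs-vol
    cephfs:
      monitors:                 # placeholder Ceph monitor addresses
      - 10.16.154.78:6789
      - 10.16.154.82:6789
      user: admin               # placeholder CephX user
      secretRef:
        name: ceph-secret       # placeholder Secret holding the CephX key
      readOnly: true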

2.2 configMap

A ConfigMap provides a way to inject configuration data into pods. The data stored in a ConfigMap can be referenced in a volume of type configMap and then consumed by containerized applications running in a pod.

When referencing a ConfigMap, you provide the name of the ConfigMap inside the volume. You can customize the path used for a specific entry in the ConfigMap. The following configuration shows how to mount the log-config ConfigMap onto a Pod called configmap-pod:
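
A sketch of that configuration, modeled on the upstream Kubernetes documentation example (it assumes a ConfigMap named log-config with a log_level key already exists):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test
    image: docker.io/library/busybox:1.28.4
    command: [ "sh", "-c", "sleep 100000" ]
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config        # the volume's mountPath
  volumes:
  - name: config-vol
    configMap:
      name: log-config              # the ConfigMap to project
      items:
      - key: log_level              # entry in the ConfigMap
        path: log_level             # file name under /etc/config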

The log-config ConfigMap is mounted as a volume, and everything stored in its log_level entry is mounted into the Pod at the path /etc/config/log_level. Note that this path is derived from the volume's mountPath plus the path keyed by log_level.

Note:

  • You must create a ConfigMap before you can use it.

  • A container using a ConfigMap as a subPath volume mount will not receive ConfigMap updates.

  • Text data is exposed as files using UTF-8 character encoding. For other character encodings, use binaryData.

For more, see the separate summary on providing configuration and variable information to containers with ConfigMap.

2.3 downwardAPI

There are two ways to expose Pod and Container fields to a running Container:

  • Environment variables
  • Volume files

Together, these two ways of exposing Pod and Container fields are called the Downward API.
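
A hedged sketch of the volume-file variant (the Pod name and file paths are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.28.4
    command: [ "sh", "-c", "sleep 100000" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"          # exposed as /etc/podinfo/labels
        fieldRef:
          fieldPath: metadata.labels
      - path: "pod_name"        # exposed as /etc/podinfo/pod_name
        fieldRef:
          fieldPath: metadata.name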

2.4 emptyDir

An emptyDir volume is first created when a Pod is assigned to a node, and it exists as long as that Pod is running on that node. As the name says, the emptyDir volume is initially empty. All containers in the Pod can read and write the same files in the emptyDir volume, though the volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.

Note: a container crash does not remove the Pod from the node. The data in an emptyDir volume is safe across container crashes.

Some uses for an emptyDir are:

  • scratch space, such as for a disk-based merge sort
  • checkpointing a long computation for recovery from crashes
  • holding files that a content-manager container fetches while a webserver container serves the data

Depending on your environment, emptyDir volumes are stored on whatever medium backs the node, such as disk, SSD, or network storage. However, if you set the emptyDir.medium field to "Memory", Kubernetes mounts a tmpfs (RAM-backed filesystem) for you instead. While tmpfs is very fast, be aware that unlike disks, tmpfs is cleared on node reboot, and any files you write count against your container's memory limit.

Note: if the SizeMemoryBackedVolumes feature gate is enabled, you can specify a size for memory-backed volumes. If no size is specified, memory-backed volumes are sized to 50% of the memory on a Linux host.
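
As a hedged fragment (sizeLimit is a standard emptyDir field; 500Mi is an arbitrary example value), the size can be capped like this:

      volumes:
      - name: cache-volume
        emptyDir:
          medium: Memory
          sizeLimit: 500Mi    # cap on the tmpfs size; still counts against the container's memory limit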

emptyDir configuration example

[root@localhost configmap]# cat busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels: {name: busybox}
  template:
    metadata:
      name: busybox
      labels: {name: busybox}
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.28.4
        command: [ "httpd" ]
        args: [ "-f" ]
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
      - name: cache-volume
        #emptyDir: {}  # plain disk-backed emptyDir, not tmpfs
        emptyDir:
          medium: Memory   # tmpfs (memory-backed filesystem)
[root@localhost configmap]# kubectl exec -it   pod/busybox-546b67d9cc-ghzpt  -- sh

tmpfs on /cache type tmpfs (rw,relatime,size=8009840k)

2.5 glusterfs

glusterfs allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is deleted, the contents of a glusterfs volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be shared between pods. GlusterFS can be mounted by multiple writers simultaneously.

Note: You must have your own GlusterFS installation running before you can use it.

See the GlusterFS example for more details.
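
A hedged sketch of a glusterfs volume in a Pod spec (the Endpoints name glusterfs-cluster and volume name kube_vol are placeholders):

  volumes:
  - name: glusterfs-vol
    glusterfs:
      endpoints: glusterfs-cluster   # placeholder Endpoints object listing the Gluster nodes
      path: kube_vol                 # placeholder GlusterFS volume name
      readOnly: true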

2.6 hostPath (the Pod must be pinned to a specific node each time)

Warning:

HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPath when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as read-only.

If restricting HostPath access to specific directories through an AdmissionPolicy, volumeMounts must be required to use readOnly mounts for the policy to be effective.
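
For example (a hedged fragment with hypothetical names), a read-only hostPath mount looks like this in the container spec:

        volumeMounts:
        - name: host-logs            # hypothetical hostPath volume name
          mountPath: /var/log/host
          readOnly: true             # required for the AdmissionPolicy restriction to be effective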

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.

For example, some uses for hostPath are:

  • running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
  • running cAdvisor in a container; use a hostPath of /sys
  • allowing a Pod to specify whether a given hostPath should exist prior to the Pod running, whether it should be created, and what it should exist as

In addition to the required path property, you can optionally specify a type for a hostPath volume.

The supported values for the type field are:

  • "" (empty string, the default): for backward compatibility; no checks are performed before mounting the hostPath volume.
  • DirectoryOrCreate: if nothing exists at the given path, an empty directory is created there as needed, with permission set to 0755 and the same group and ownership as the kubelet.
  • Directory: a directory must exist at the given path.
  • FileOrCreate: if nothing exists at the given path, an empty file is created there as needed, with permission set to 0644 and the same group and ownership as the kubelet.
  • File: a file must exist at the given path.
  • Socket: a UNIX socket must exist at the given path.
  • CharDevice: a character device must exist at the given path.
  • BlockDevice: a block device must exist at the given path.

Be careful when using this type of volume, because:

  • HostPaths can expose privileged system credentials (such as the kubelet's) or privileged APIs (such as the container runtime socket), which can be used for container escape or to attack other parts of the cluster.
  • Pods with identical configuration (such as those created from a PodTemplate) may behave differently on different nodes because the files on the nodes differ.
  • Files or directories created on the underlying host are only writable by root. You either need to run your process as root in a privileged container, or modify the file permissions on the host so you can write to the hostPath volume.

hostPath configuration example

[root@localhost configmap]# cat busybox.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
  labels: {name: busybox}
spec:
  replicas: 1
  selector:
    matchLabels: {name: busybox}
  template:
    metadata:
      name: busybox
      labels: {name: busybox}
    spec:
      containers:
      - name: busybox-mysql
        image: docker.io/library/busybox:1.28.4
        command: [ "httpd" ]
        args: [ "-f" ]
        volumeMounts:
        - name: es-config-dir
          mountPath: /etc/wubo
        - name: es-config-file
          mountPath: /etc/wubo/a.txt
      volumes:
      - name: es-config-dir
        hostPath:
          path: /opt/wubo
          type: DirectoryOrCreate
      - name: es-config-file
        hostPath:
          path: /opt/wubo/a.txt
          type: FileOrCreate

Note: FileOrCreate mode does not create the parent directory of the file. If the parent directory of the mounted file does not exist, the Pod fails to start. To ensure this mode works, you can mount the directory and the file separately, as in the example above.

2.7 iSCSI, a block storage protocol (a free open-source option is Longhorn: Longhorn | Documentation)

An iscsi volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of an iscsi volume are preserved and the volume is merely unmounted. This means that an iSCSI volume can be pre-populated with data, and that data can be shared between pods.

Note: You must have your own iSCSI server running with the volume created before you can use it.

A feature of iSCSI is that it can be mounted read-only by multiple consumers simultaneously. This means you can pre-populate a volume with your dataset and then serve it in parallel from as many Pods as you need. Unfortunately, iSCSI volumes can only be mounted by a single consumer in read-write mode; simultaneous writers are not allowed.

See the iSCSI example for more details.
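
A hedged sketch of an iscsi volume in a Pod spec (the target portal, IQN and LUN below are placeholders):

  volumes:
  - name: iscsi-vol
    iscsi:
      targetPortal: 10.0.2.15:3260                          # placeholder iSCSI portal (IP:port)
      iqn: iqn.2001-04.com.example:storage.kube.sys1.xyz    # placeholder target IQN
      lun: 0
      fsType: ext4
      readOnly: true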

2.8 local (the Pod does not need to be pinned to a specific node, because nodeAffinity schedules it onto the right node)

A local volume represents a mounted local storage device such as a disk, partition or directory.

Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported.

Compared to hostPath volumes, local volumes are used in a durable and portable manner without manually scheduling Pods to nodes. The system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.

hostPath: suitable when a Pod has high disk-performance requirements and cannot tolerate network jitter.


local PV: when the data cannot follow Pod scheduling, let the Pod be scheduled according to the data, so that when the Pod is recreated it lands back on the original node.


However, local volumes depend on the availability of the underlying node and are not suitable for all applications. If a node becomes unhealthy, the local volume becomes inaccessible to the Pod, and a Pod using it cannot run. Applications using local volumes must be able to tolerate this reduced availability, as well as potential data loss, depending on the durability characteristics of the underlying disk.

The following example shows a PersistentVolume using a local volume and nodeAffinity.

Step 1: The PersistentVolume volumeMode can be set to "Block" or "Filesystem".

        Block: the disk must first be partitioned with fdisk

        Filesystem: a plain directory is sufficient

Step 2: StorageClass local-storage

[root@localhost local]# cat local-storage.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"  ## whether to make this the default StorageClass
provisioner: kubernetes.io/no-provisioner
parameters:
  archiveOnDelete: "true"                                 ## "false": data is not kept when the PVC is deleted; "true": data is kept
mountOptions:
  - hard                                                  ## hard mount
  - nfsvers=4                                             ## NFS version; set according to the NFS server version
volumeBindingMode: WaitForFirstConsumer # Immediate | WaitForFirstConsumer (delayed binding)
reclaimPolicy: Retain # Retain: keep the PV when the PVC is deleted (manual PV cleanup required); Delete: remove both
allowVolumeExpansion: true   # whether to allow volume expansion (true/false)

Local volumes do not currently support dynamic provisioning; however, a StorageClass should still be created to delay volume binding until Pod scheduling. This is specified by the WaitForFirstConsumer volume binding mode.

Delaying volume binding allows the scheduler to consider all of a Pod's scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.

Step 3: PersistentVolume

[root@localhost configmap]# cat local-pv1.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv1
  labels: {name: local-pv1}
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem #Block Filesystem
  accessModes:
  - ReadWriteMany
  #- ReadWriteOnce
  #- ReadWriteOncePod
  #- ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain #Delete Retain
  storageClassName: local-storage
  local:
    #path: /dev/sdb1 or /mnt/disk/ssd  # for Block mode
    path: /opt/wubo  #Filesystem
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - 172.16.10.5

You must set the PersistentVolume nodeAffinity when using local volumes. The Kubernetes scheduler uses the PersistentVolume nodeAffinity to schedule these Pods to the correct node.

The PersistentVolume volumeMode can be set to "Block" (instead of the default value "Filesystem") to expose the local volume as a raw block device.

When using local volumes, it is recommended to create a StorageClass with volumeBindingMode set to WaitForFirstConsumer. For more details, see the local StorageClass example. Delaying volume binding ensures that the PersistentVolumeClaim binding decision will also be evaluated against any other node constraints the Pod may have, such as node resource requirements, node selectors, Pod affinity, and Pod anti-affinity.

Step 4: PersistentVolumeClaim

[root@localhost configmap]# cat local-pvc1.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc1
  #namespace: wubo
spec:
  accessModes:  # access modes
  #- ReadWriteOnce
  #- ReadWriteOncePod
  - ReadWriteMany
  #- ReadOnlyMany
  storageClassName: local-storage
  resources: # requested resources: 1Gi of storage
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      name: "local-pv1"
    #matchExpressions:
    #  - {key: environment, operator: In, values: [dev]}

Check:

Every 2.0s: kubectl get pvc,pv,sc                                                                                                                          Thu Jan 27 10:51:16 2022

NAME                               STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/local-pvc1   Pending                                      local-storage   3s

NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/local-pv1   50Gi	RWX            Retain           Available           local-storage            11s

NAME                                        PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/longhorn        driver.longhorn.io             Delete          Immediate              true                   7d16h
storageclass.storage.k8s.io/local-storage   kubernetes.io/no-provisioner   Retain          WaitForFirstConsumer   false                  19s

As you can see, when no pod uses the PVC, the PVC and PV do not get bound.

Step 5: A Pod consumes the PVC

[root@localhost configmap]# cat busybox.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
  labels: {name: busybox}
spec:
  replicas: 1
  selector:
    matchLabels: {name: busybox}
  template:
    metadata:
      name: busybox
      labels: {name: busybox}
    spec:
      containers:
      - name: busybox-mysql
        image: docker.io/library/busybox:1.28.4
        command: [ "httpd" ]
        args: [ "-f" ]
        volumeMounts:
        - name: volv
          mountPath: /tmp
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-pvc1

Check again: the PVC and PV are now bound.

Every 2.0s: kubectl get pvc,pv,sc,pod                                                                                                                      Thu Jan 27 10:53:22 2022

NAME                               STATUS   VOLUME	CAPACITY   ACCESS MODES   STORAGECLASS    AGE
persistentvolumeclaim/local-pvc1   Bound    local-pv1   50Gi	   RWX            local-storage   2m10s

NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS    REASON   AGE
persistentvolume/local-pv1   50Gi	RWX            Retain           Bound    default/local-pvc1   local-storage            2m18s

NAME                                        PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE	  ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/longhorn        driver.longhorn.io             Delete          Immediate              true                   7d16h
storageclass.storage.k8s.io/local-storage   kubernetes.io/no-provisioner   Retain          WaitForFirstConsumer   false                  2m26s

NAME                           READY   STATUS    RESTARTS   AGE
pod/busybox-74b968d7dc-tz6g9   1/1     Running   0          10s

At this point, even if the Pod is deleted, the PVC and PV remain in the Bound state.

After deleting the Pod and the PVC, the PV status changes to Released; that PV will not be bound to another PVC again.
The PV has to be deleted manually.

The Local Storage used here is fairly simple, but the differences from hostPath are already visible: first, delayed consumption; second, the Pod is automatically scheduled onto the node declared by the PV, so we no longer have to pin the node in the Pod ourselves.

That said, Local Storage is still somewhat cumbersome to use: each time you have to manually create the PV and the local path. Is there a more convenient way? ====> The Local Storage Provisioner, which creates PVs for us automatically and maintains their lifecycle.

External static provisioner: local-volume-provisioner creates PVs automatically

Automatically creating local PVs

An external static provisioner can be run separately to improve management of the local volume lifecycle. Note that this provisioner does not yet support dynamic provisioning. For an example of how to run an external local provisioner, see the local volume provisioner user guide.

Note: if the external static provisioner is not used to manage the volume lifecycle, local PersistentVolumes require manual cleanup and deletion by the user.

The local volume provisioner manages the lifecycle of local PVs.


1. Install Helm

[root@localhost sig-storage-local-static-provisioner-master]# wget https://get.helm.sh/helm-v3.8.0-linux-amd64.tar.gz

[root@localhost sig-storage-local-static-provisioner-master]#chmod +x linux-amd64/helm
[root@localhost sig-storage-local-static-provisioner-master]# mv linux-amd64/helm /usr/local/bin/

Add a domestic (China-hosted) chart repository:

[root@localhost sig-storage-local-static-provisioner-master]# helm repo remove stable
[root@localhost sig-storage-local-static-provisioner-master]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@localhost sig-storage-local-static-provisioner-master]# helm repo update
[root@localhost sig-storage-local-static-provisioner-master]# helm search repo
Enable command auto-completion:
[root@localhost sig-storage-local-static-provisioner-master]# yum install bash-completion

[root@localhost sig-storage-local-static-provisioner-master]# source /usr/share/bash-completion/bash_completion

[root@localhost sig-storage-local-static-provisioner-master]# source <(kubectl completion bash)

[root@localhost sig-storage-local-static-provisioner-master]# echo "source <(kubectl completion bash)" >> ~/.bashrc

2. Clone the project

[root@localhost sig-storage-local-static-provisioner-master]# git clone https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner.git

2.1) In the project directory, create the StorageClass

[root@localhost sig-storage-local-static-provisioner-master]# kubectl create -f deployment/kubernetes/example/default_example_storageclass.yaml
storageclass.storage.k8s.io/fast-disks created
[root@localhost sig-storage-local-static-provisioner-master]# cat deployment/kubernetes/example/default_example_storageclass.yaml
# Only create this for K8s 1.9+
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# Supported policies: Delete, Retain
reclaimPolicy: Delete

Or, alternatively:
[root@localhost local]# cat local-storage.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"  ## whether to make this the default StorageClass
provisioner: kubernetes.io/no-provisioner
parameters:
  archiveOnDelete: "true"                                 ## "false": data is not kept when the PVC is deleted; "true": data is kept
mountOptions:
  - hard                                                  ## hard mount
  - nfsvers=4                                             ## NFS version; set according to the NFS server version
volumeBindingMode: WaitForFirstConsumer # Immediate | WaitForFirstConsumer
reclaimPolicy: Retain # Retain | Delete
allowVolumeExpansion: true   # whether to allow volume expansion (true/false)

2.2) Create the directory /opt/wubo

mkdir -p /opt/wubo

2.3) Install local-volume-provisioner

Check the node's filesystem type:

[root@localhost sig-storage-local-static-provisioner-master]# df -h -T

[root@localhost sig-storage-local-static-provisioner-master]# df -HT | grep xfs
/dev/mapper/centos-root xfs        51G   14G   38G  28% /
/dev/sda1               xfs       521M  144M  377M  28% /boot

Edit helm/provisioner/values.yaml.

The changes to make are:

classes:
- name: fast-disks # must match the name of the StorageClass created above
  hostDir: /opt/wubo
  fsType: xfs # must match the node's filesystem type, e.g. ext4

daemonset:
  image: harbor.jettech.com/jettechtools/store/local/local-volume-provisioner:v2.4.0  # point this at your own private registry; if the upstream image cannot be pulled, use googleimages/local-volume-provisioner:v2.4.0

The modified file:

[root@localhost sig-storage-local-static-provisioner-master]# cat  helm/provisioner/values.yaml 
#
# Common options.
#
common:
  #
  # Defines whether to generate rbac roles
  #
  rbac:
    # rbac.create: `true` if rbac resources should be created
    create: true
    # rbac.pspEnabled: `true` if PodSecurityPolicy resources should be created
    pspEnabled: false
  #
  # Defines whether to generate a serviceAccount
  #
  serviceAccount:
    # serviceAccount.create: Whether to create a service account or not
    create: true
    # serviceAccount.name: The name of the service account to create or use
    name: ""
  #
  # Beta PV.NodeAffinity field is used by default. If running against pre-1.10
  # k8s version, the `useAlphaAPI` flag must be enabled in the configMap.
  #
  useAlphaAPI: false
  #
  # Indicates if PVs should be dependents of the owner Node.
  #
  setPVOwnerRef: false
  #
  # Provisioner clean volumes in process by default. If set to true, provisioner
  # will use Jobs to clean.
  #
  useJobForCleaning: false
  #
  # Provisioner name contains Node.UID by default. If set to true, the provisioner
  # name will only use Node.Name.
  #
  useNodeNameOnly: false
  #
  # Resync period in reflectors will be random between minResyncPeriod and
  # 2*minResyncPeriod. Default: 5m0s.
  #
  #minResyncPeriod: 5m0s
  #
  # Mount the host's `/dev/` by default so that block device symlinks can be
  # resolved by the containers
  #
  mountDevVolume: true
  #
  # A mapping of additional HostPath volume names to host paths to create, for
  # your init container to consume
  #
  additionalHostPathVolumes: {}
  #  provisioner-proc: /proc
  #  provisioner-mnt: /mnt
  #
  # Map of label key-value pairs to apply to the PVs created by the
  # provisioner. Uncomment to add labels to the list.
  #
  #labelsForPV:
  #  pv-labels: can-be-selected
#
# Configure storage classes.
#
classes:
- name: fast-disks # Defines name of storage classe.
  # Path on the host where local volumes of this storage class are mounted
  # under.
  #hostDir: /mnt/fast-disks
  hostDir: /opt/wubo
  # Optionally specify mount path of local volumes. By default, we use same
  # path as hostDir in container.
  # mountDir: /mnt/fast-disks
  # The volume mode of created PersistentVolume object. Default to Filesystem
  # if not specified.
  volumeMode: Filesystem #Block
  # Filesystem type to mount.
  # It applies only when the source path is a block device,
  # and desire volume mode is Filesystem.
  # Must be a filesystem type supported by the host operating system.
  #fsType: ext4
  fsType: xfs
  # File name pattern to discover. By default, discover all file names.
  namePattern: "*"
  blockCleanerCommand:
  #  Do a quick reset of the block device during its cleanup.
  #  - "/scripts/quick_reset.sh"
  #  or use dd to zero out block dev in two iterations by uncommenting these lines
  #  - "/scripts/dd_zero.sh"
  #  - "2"
  # or run shred utility for 2 iteration.s
    - "/scripts/shred.sh"
    - "2"
  # or blkdiscard utility by uncommenting the line below.
  #  - "/scripts/blkdiscard.sh"
  # Uncomment to create storage class object with default configuration.
  # storageClass: true
  # Uncomment to create storage class object and configure it.
  # storageClass:
    # reclaimPolicy: Delete # Available reclaim policies: Delete/Retain, defaults: Delete.
    # isDefaultClass: true # set as default class
#
# Configure DaemonSet for provisioner.
#
daemonset:
  #
  # Defines annotations for each Pod in the DaemonSet.
  #
  podAnnotations: {}
  #
  # Defines labels for each Pod in the DaemonSet.
  #
  podLabels: {}
  #
  # Defines Provisioner's image name including container registry.
  #
  #image: k8s.gcr.io/sig-storage/local-volume-provisioner:v2.4.0
  image: harbor.jettech.com/jettechtools/store/local/local-volume-provisioner:v2.4.0
  #
  # Defines Image download policy, see kubernetes documentation for available values.
  #
  #imagePullPolicy: Always
  #
  # Defines a name of the Pod Priority Class to use with the Provisioner DaemonSet
  #
  # Note that if you want to make it critical, specify "system-cluster-critical"
  # or "system-node-critical" and deploy in kube-system namespace.
  # Ref: https://k8s.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical
  #
  #priorityClassName: system-node-critical
  # If configured, nodeSelector will add a nodeSelector field to the DaemonSet PodSpec.
  #
  # NodeSelector constraint for local-volume-provisioner scheduling to nodes.
  # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  nodeSelector: {}
  #
  # If configured KubeConfigEnv will (optionally) specify the location of kubeconfig file on the node.
  #  kubeConfigEnv: KUBECONFIG
  #
  # List of node labels to be copied to the PVs created by the provisioner in a format:
  #
  #  nodeLabels:
  #    - failure-domain.beta.kubernetes.io/zone
  #    - failure-domain.beta.kubernetes.io/region
  #
  # If configured, tolerations will add a toleration field to the DaemonSet PodSpec.
  #
  # Node tolerations for local-volume-provisioner scheduling to nodes with taints.
  # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []
  #
  # If configured, affinity will add a affinity filed to the DeamonSet PodSpec.
  # Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
  affinity: {}
  #
  # If configured, resources will set the requests/limits field to the Daemonset PodSpec.
  # Ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  resources: {}
    # limits:
    #   memory: "512Mi"
    #   cpu: "1000m"
    # requests:
    #   memory: "32Mi"
    #   cpu: "10m"
  #
  # If set to false, containers created by the Provisioner Daemonset will run without extra privileges.
  privileged: true
  # Any init containers can be configured here.
  # Ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
  initContainers: []
#
# Configure Prometheus monitoring
#
serviceMonitor:
  enabled: false
  ## Interval at which Prometheus scrapes the provisioner
  interval: 10s
  # Namespace Prometheus is installed in defaults to release namespace
  namespace:
  ## Defaults to whats used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
  ## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
  ## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
  additionalLabels: {}
  relabelings: []
  # - sourceLabels: [__meta_kubernetes_pod_node_name]
  #   separator: ;
  #   regex: ^(.*)$
  #   targetLabel: nodename
  #   replacement: $1
  #   action: replace

#
# Overrice the default chartname or releasename
#
nameOverride: ""
fullnameOverride: ""

Generate the YAML manifests:

[root@localhost sig-storage-local-static-provisioner-master]# helm template ./helm/provisioner > deployment/kubernetes/provisioner_generated.yaml

or
helm template ./helm/provisioner -f ./helm/provisioner/values.yaml > local-volume-provisioner.generated.yaml

Deploy the provisioner pods:

kubectl apply -f deployment/kubernetes/provisioner_generated.yaml


[root@localhost sig-storage-local-static-provisioner-master]# cat deployment/kubernetes/provisioner_generated.yaml 
---
# Source: provisioner/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-provisioner
  namespace: default
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
---
# Source: provisioner/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-provisioner-config
  namespace: default
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
data:
  storageClassMap: |
    fast-disks:
      hostDir: /opt/wubo
      mountDir: /opt/wubo
      blockCleanerCommand:
        - "/scripts/shred.sh"
        - "2"
      volumeMode: Filesystem
      fsType: xfs
      namePattern: "*"
---
# Source: provisioner/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: release-name-provisioner-node-clusterrole
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
# Source: provisioner/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: release-name-provisioner-pv-binding
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
subjects:
- kind: ServiceAccount
  name: release-name-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: system:persistent-volume-provisioner
  apiGroup: rbac.authorization.k8s.io
---
# Source: provisioner/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: release-name-provisioner-node-binding
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
subjects:
- kind: ServiceAccount
  name: release-name-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: release-name-provisioner-node-clusterrole
  apiGroup: rbac.authorization.k8s.io
---
# Source: provisioner/templates/daemonset_linux.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: release-name-provisioner
  namespace: default
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: provisioner
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: provisioner
        app.kubernetes.io/instance: release-name
      annotations:
        checksum/config: 93b518ef3d3389122a0485895f8c9c13196c09a409ec48db9a3cd4a8acdcfdbc
    spec:
      serviceAccountName: release-name-provisioner
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: provisioner
          image: harbor.jettech.com/jettechtools/store/local/local-volume-provisioner:v2.4.0
          securityContext:
            privileged: true
          env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: JOB_CONTAINER_IMAGE
            value: harbor.jettech.com/jettechtools/store/local/local-volume-provisioner:v2.4.0
          ports:
          - name: metrics
            containerPort: 8080
          volumeMounts:
            - name: provisioner-config
              mountPath: /etc/provisioner/config
              readOnly: true
            - name: provisioner-dev
              mountPath: /dev
            - name: fast-disks
              mountPath: /opt/wubo
              mountPropagation: HostToContainer
      volumes:
        - name: provisioner-config
          configMap:
            name: release-name-provisioner-config
        - name: provisioner-dev
          hostPath:
            path: /dev
        - name: fast-disks
          hostPath:
            path: /opt/wubo
---
# Source: provisioner/templates/daemonset_windows.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: release-name-provisioner-win
  namespace: default
  labels:
    helm.sh/chart: provisioner-2.6.0-alpha.0
    app.kubernetes.io/name: provisioner
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: release-name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: provisioner
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: provisioner
        app.kubernetes.io/instance: release-name
      annotations:
        checksum/config: 93b518ef3d3389122a0485895f8c9c13196c09a409ec48db9a3cd4a8acdcfdbc
    spec:
      serviceAccountName: release-name-provisioner
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
        # an empty key operator Exists matches all keys, values and effects
        # which meants that this will tolerate everything
        - operator: "Exists"
      containers:
        - name: provisioner
          image: harbor.jettech.com/jettechtools/store/local/local-volume-provisioner:v2.4.0
          env:
          - name: MY_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: MY_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: JOB_CONTAINER_IMAGE
            value: harbor.jettech.com/jettechtools/store/local/local-volume-provisioner:v2.4.0
          ports:
          - name: metrics
            containerPort: 8080
          volumeMounts:
            - name: provisioner-config
              mountPath: /etc/provisioner/config
              readOnly: true
            - name: provisioner-dev
              mountPath: /dev
            - name: fast-disks
              mountPath: /opt/wubo
              mountPropagation: HostToContainer
            - name: csi-proxy-volume-v1
              mountPath: \\.\pipe\csi-proxy-volume-v1
            - name: csi-proxy-filesystem-v1
              mountPath: \\.\pipe\csi-proxy-filesystem-v1
            # these csi-proxy paths are still included for compatibility, they're used
            # only if the node has still the beta version of the CSI proxy
            - name: csi-proxy-volume-v1beta2
              mountPath: \\.\pipe\csi-proxy-volume-v1beta2
            - name: csi-proxy-filesystem-v1beta2
              mountPath: \\.\pipe\csi-proxy-filesystem-v1beta2
      volumes:
        - name: csi-proxy-volume-v1
          hostPath:
            path: \\.\pipe\csi-proxy-volume-v1
            type: ""
        - name: csi-proxy-filesystem-v1
          hostPath:
            path: \\.\pipe\csi-proxy-filesystem-v1
            type: ""
        # these csi-proxy paths are still included for compatibility, they're used
        # only if the node has still the beta version of the CSI proxy
        - name: csi-proxy-volume-v1beta2
          hostPath:
            path: \\.\pipe\csi-proxy-volume-v1beta2
            type: ""
        - name: csi-proxy-filesystem-v1beta2
          hostPath:
            path: \\.\pipe\csi-proxy-filesystem-v1beta2
            type: ""
        - name: provisioner-config
          configMap:
            name: release-name-provisioner-config
        - name: provisioner-dev
          hostPath:
            path: "C:\\dev"
            # If nothing exists at the given path, an empty directory will be
            # created there as needed with permission set to 0755,
            # having the same group and ownership with Kubelet.
            type: DirectoryOrCreate
        - name: fast-disks
          hostPath:
            path: /opt/wubo

[root@localhost k3s]# kubectl get pvc,pv,sc,pod -o wide
NAME                                     PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/longhorn     driver.longhorn.io             Delete          Immediate              true                   7d18h
storageclass.storage.k8s.io/fast-disks   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  31m

NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE           NOMINATED NODE   READINESS GATES
pod/release-name-provisioner-67kxh   1/1     Running   0          3s    10.42.1.153   172.16.10.21              
pod/release-name-provisioner-pt6pf   1/1     Running   0          3s    10.42.0.225   172.16.10.5               
pod/release-name-provisioner-grrfs   1/1     Running   0          3s    10.42.2.113   172.16.10.15              

2.4) Mount the disks

The provisioner itself does not provide local volumes; instead, the provisioner running on each node dynamically "discovers" mount points under the discovery directory. When a node's provisioner finds a mount point under /opt/wubo, it creates a PV whose local.path is that mount point and sets nodeAffinity to that node.

With only one disk you can simulate multiple mount points: since one mount point corresponds to one PV, you can create multiple PVs by using mount --bind.

This use of mount --bind is for the case where the node has no spare disks.

Create the mount-bind.sh shell script and run it; once the mounts are in place, PVs are created automatically.

[root@localhost local]# cat mount-bind.sh 
for i in $(seq 1 5); do
  mkdir -p /opt/wubo-bind/vol${i}
  mkdir -p /opt/wubo/vol${i}
  mount --bind /opt/wubo-bind/vol${i} /opt/wubo/vol${i}
done
 
# To make the mounts permanent, add them to /etc/fstab:
#for i in $(seq 1 5); do
#  echo /opt/wubo-bind/vol${i} /opt/wubo/vol${i} none bind 0 0 | sudo tee -a /etc/fstab
#done

[root@localhost local]# mount | grep wubo
/dev/mapper/centos-root on /opt/wubo/vol1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /opt/wubo/vol2 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /opt/wubo/vol3 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /opt/wubo/vol4 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-root on /opt/wubo/vol5 type xfs (rw,relatime,attr2,inode64,noquota)

After running the script, wait a moment and then query the PVs; you will find they were created automatically.

[root@localhost local]# kubectl get pvc,pv,sc,pod -o wide
NAME                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/local-pv-db3c6d7d   47Gi       RWO            Delete           Available           fast-disks              50s   Filesystem
persistentvolume/local-pv-97215846   47Gi       RWO            Delete           Available           fast-disks              50s   Filesystem
persistentvolume/local-pv-fb708b6f   47Gi       RWO            Delete           Available           fast-disks              50s   Filesystem
persistentvolume/local-pv-6e3e0228   47Gi       RWO            Delete           Available           fast-disks              50s   Filesystem
persistentvolume/local-pv-364b3d31   47Gi       RWO            Delete           Available           fast-disks              50s   Filesystem

NAME                                     PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/longhorn     driver.longhorn.io             Delete          Immediate              true                   7d18h
storageclass.storage.k8s.io/fast-disks   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  39m

NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
pod/release-name-provisioner-67kxh   1/1     Running   0          8m11s   10.42.1.153   172.16.10.21              
pod/release-name-provisioner-pt6pf   1/1     Running   0          8m11s   10.42.0.225   172.16.10.5               
pod/release-name-provisioner-grrfs   1/1     Running   0          8m11s   10.42.2.113   172.16.10.15              

2.5) Test that Pods can run

2.5.1) Deployment: with this approach you create the PVC yourself, but no longer need to create the PV

[root@localhost local]# cat local-pvc1.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc1
  #namespace: wubo
spec:
  accessModes:  # access modes
  - ReadWriteOnce
  #- ReadWriteOncePod
  #- ReadWriteMany
  #- ReadOnlyMany
  storageClassName: fast-disks
  resources: # requested resources: 1Gi of storage
    requests:
      storage: 1Gi
  #selector:
  #  matchLabels:
  #    name: "local-pv1"
    #matchExpressions:
    #  - {key: environment, operator: In, values: [dev]}
[root@localhost local]# cat busybox.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
  labels: {name: busybox}
spec:
  replicas: 1
  selector:
    matchLabels: {name: busybox}
  template:
    metadata:
      name: busybox
      labels: {name: busybox}
    spec:
      containers:
      - name: busybox-mysql
        image: docker.io/library/busybox:1.28.4
        command: [ "httpd" ]
        args: [ "-f" ]
        volumeMounts:
        - name: volv
          mountPath: /tmp
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-pvc1

Check:

Every 2.0s: kubectl get pvc,pv,sc,pod -o wide                                                                                                              Thu Jan 27 14:31:08 2022

NAME                               STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/local-pvc1   Bound    local-pv-fb708b6f   47Gi       RWO            fast-disks     71s   Filesystem

NAME                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS   REASON   AGE    VOLUMEMODE
persistentvolume/local-pv-db3c6d7d   47Gi	RWO            Delete           Available                        fast-disks              122m   Filesystem
persistentvolume/local-pv-97215846   47Gi	RWO            Delete           Available                        fast-disks              122m   Filesystem
persistentvolume/local-pv-6e3e0228   47Gi	RWO            Delete           Available                        fast-disks              122m   Filesystem
persistentvolume/local-pv-364b3d31   47Gi	RWO            Delete           Available                        fast-disks              122m   Filesystem
persistentvolume/local-pv-fb708b6f   47Gi	RWO            Delete           Bound       default/local-pvc1   fast-disks              122m   Filesystem

NAME                                     PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/longhorn     driver.longhorn.io             Delete          Immediate              true                   7d20h
storageclass.storage.k8s.io/fast-disks   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  161m

NAME                                 READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
pod/release-name-provisioner-67kxh   1/1     Running   0          130m   10.42.1.153   172.16.10.21              
pod/release-name-provisioner-pt6pf   1/1     Running   0          130m   10.42.0.225   172.16.10.5               
pod/release-name-provisioner-grrfs   1/1     Running   0          130m   10.42.2.113   172.16.10.15              
pod/busybox-74b968d7dc-vvb2p         1/1     Running   0          58s    10.42.0.227   172.16.10.5               

2.5.2) StatefulSet + volumeClaimTemplates

No PV or PVC needs to be created manually.

[root@localhost local]# cat busybox-statefulset.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: {name: busybox-headless}
  name: busybox-headless
  #namespace: wubo
spec:
  ports:
  - {name: t9081, port: 81, protocol: TCP, targetPort: 80}
  selector: {name: busybox-headless}
  #type: NodePort
  clusterIP: None ## note this value: None makes it a headless service
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: busybox-headless
  #namespace: wubo
  labels: {name: busybox-headless}
spec:
  serviceName: "busybox-headless"
  replicas: 3 # three replicas
  selector:
    matchLabels: {name: busybox-headless}
  template:
    metadata:
      name: busybox-headless
      labels: {name: busybox-headless}
    spec:
      containers:
      - name: busybox-headless
        #image: 172.16.10.5:5000/library/busybox:1.21.4
        image: docker.io/library/busybox:1.28.4
        command:
        - "/bin/sh"
        args:
        - "-c"
        - "sleep 100000"
        volumeMounts:
        - name: local-pvc1
          mountPath: /tmp
  volumeClaimTemplates:
  - metadata:
      name: local-pvc1
      #annotations:
      #  volume.beta.kubernetes.io/storage-class: "fast-disks"   # the name of the StorageClass we created
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "fast-disks"
      resources:
        requests:
          storage: 1Gi

Check: three Pods map to three PVCs, and the three PVCs are bound to three PVs.

[root@localhost local]# kubectl get pvc,pv,sc,pod -o wide
NAME                                                  STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
persistentvolumeclaim/local-pvc1-busybox-headless-0   Bound    local-pv-db3c6d7d   47Gi       RWO            fast-disks     2m50s   Filesystem
persistentvolumeclaim/local-pvc1-busybox-headless-1   Bound    local-pv-6e3e0228   47Gi       RWO            fast-disks     2m47s   Filesystem
persistentvolumeclaim/local-pvc1-busybox-headless-2   Bound    local-pv-97215846   47Gi       RWO            fast-disks     2m45s   Filesystem

NAME                                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE    VOLUMEMODE
persistentvolume/local-pv-364b3d31   47Gi       RWO            Delete           Available                                           fast-disks              146m   Filesystem
persistentvolume/local-pv-fb708b6f   47Gi       RWO            Delete           Available                                           fast-disks              16m    Filesystem
persistentvolume/local-pv-db3c6d7d   47Gi       RWO            Delete           Bound       default/local-pvc1-busybox-headless-0   fast-disks              146m   Filesystem
persistentvolume/local-pv-6e3e0228   47Gi       RWO            Delete           Bound       default/local-pvc1-busybox-headless-1   fast-disks              146m   Filesystem
persistentvolume/local-pv-97215846   47Gi       RWO            Delete           Bound       default/local-pvc1-busybox-headless-2   fast-disks              146m   Filesystem

NAME                                     PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/longhorn     driver.longhorn.io             Delete          Immediate              true                   7d20h
storageclass.storage.k8s.io/fast-disks   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  3h5m

NAME                                 READY   STATUS    RESTARTS   AGE    IP            NODE           NOMINATED NODE   READINESS GATES
pod/release-name-provisioner-67kxh   1/1     Running   0          154m   10.42.1.153   172.16.10.21              
pod/release-name-provisioner-pt6pf   1/1     Running   0          154m   10.42.0.225   172.16.10.5               
pod/release-name-provisioner-grrfs   1/1     Running   0          154m   10.42.2.113   172.16.10.15              
pod/busybox-headless-0               1/1     Running   0          6s     10.42.0.246   172.16.10.5               
pod/busybox-headless-1               1/1     Running   0          5s     10.42.0.248   172.16.10.5               
pod/busybox-headless-2               1/1     Running   0          4s     10.42.0.2     172.16.10.5               

Write some test data:

[root@localhost local]# kubectl exec -it   pod/busybox-headless-0    -- sh
/ # cd /tmp/
/tmp # mkdir wubo
/tmp # echo aa >wubo/a
/tmp # exit
[root@localhost local]# kubectl exec -it   pod/busybox-headless-1    -- sh
/ # cd /tmp/
/tmp # mkdir wuqi
/tmp # echo bbb > wuqi/b
/tmp # exit
[root@localhost local]# kubectl exec -it   pod/busybox-headless-2    -- sh
/ # cd /tmp/
/tmp # mkdir jettech
/tmp # echo ccc > jettech/c
/tmp # 



On the physical disk:
[root@localhost sig-storage-local-static-provisioner-master]# find /opt/wubo
/opt/wubo
/opt/wubo/vol1
/opt/wubo/vol1/wubo
/opt/wubo/vol1/wubo/a
/opt/wubo/vol2
/opt/wubo/vol2/jettech
/opt/wubo/vol2/jettech/c
/opt/wubo/vol3
/opt/wubo/vol4
/opt/wubo/vol4/wuqi
/opt/wubo/vol4/wuqi/b
/opt/wubo/vol5
[root@localhost sig-storage-local-static-provisioner-master]# 

Why are all the PVs created by local-volume-provisioner RWO? If another access mode is specified when creating the PVC, it will not bind. Who decides the access mode of a discovered PV? The provisioner?

2.9 NFS (see the NFS examples, both static and dynamic provisioning)
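
A hedged sketch of a statically mounted nfs volume (the server address and export path are placeholders):

  volumes:
  - name: nfs-vol
    nfs:
      server: 172.16.10.100   # placeholder NFS server address
      path: /data/share       # placeholder exported path
      readOnly: false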

2.10 portworxVolume

A portworxVolume is an elastic block storage layer that runs hyperconverged with Kubernetes. Portworx fingerprints storage in a server, tiers based on capabilities, and aggregates capacity across multiple servers. Portworx runs in-guest in virtual machines or on bare-metal Linux nodes.

A portworxVolume can be created dynamically through Kubernetes, or it can be pre-provisioned and referenced inside a Pod.
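
A hedged sketch of referencing a pre-provisioned Portworx volume (the volume ID pxvol is a placeholder):

  volumes:
  - name: pxvol
    portworxVolume:
      volumeID: "pxvol"   # placeholder: ID of a pre-provisioned Portworx volume
      fsType: "ext4"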

Analysis of the value of Portworx | zhangjizhangji's blog (CSDN)

examples/README.md at master · kubernetes/examples · GitHub

Portworx Documentation

OpenEBS, Rook, and Rancher Longhorn are open source; the others are paid products (Portworx has a free version, but some features are limited).

For deploying rook-ceph see the linked post; however, after looking into Longhorn I decided to use Longhorn instead.

Longhorn has matured over the past few years of development; its feature description positions it for enterprise-grade applications.

2.11 secret (see Configuring Secrets)
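
A hedged sketch of mounting a Secret as a volume (the Secret named mysecret is assumed to already exist):

apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.28.4
    command: [ "sh", "-c", "sleep 100000" ]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: mysecret   # assumed to exist, e.g. created with kubectl create secret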

  
