Multi-node parallel read/write (ReadWriteMany)
Multi-node read-only (ReadOnlyMany)
Single-node read/write (ReadWriteOnce)
Because the kubelet manages containers, it is also responsible for managing storage volumes.
When managing volumes, the kubelet relies on volume plugins; it ships with built-in support for a number of them.
Volume plugins (also called volume types) fall into two groups: In-Tree (built into the kubelet) and Out-of-Tree (attached through the CSI interface; CSI stands for Container Storage Interface).
- Host level: hostPath, local
- Network level: NFS, GlusterFS, CephFS, rbd (block device)
Cloud storage: awsEBS, …
SAN (Storage Area Network): FC, iSCSI, …
SDS (Software Defined Storage): Ceph (rbd, cephfs), …
- Ephemeral storage: emptyDir
- PVC is also an In-Tree plugin
- Out-of-Tree (CSI): Longhorn
kubectl explain pods.spec.volumes
- Define the volume at the Pod level
- Mount the volume at the container level
Running kubectl explain pods.spec.volumes shows the following options:
spec:
volumes:
- name <string> # Volume name; only DNS-label characters are allowed and the name must be unique within the Pod
VOL_TYPE <Object> # The volume plugin to use and the configuration for the specific storage backend
containers:
- name: …
image: …
volumeMounts:
- name <string> # Name of the volume to mount; must match an entry in the Pod's volumes list
mountPath <string> # Mount point path inside the container's filesystem
readOnly <boolean> # Whether to mount read-only; defaults to false
subPath <string> # Mount only a subdirectory of the volume at the mount point
subPathExpr <string> # Like subPath, but the sub-path is expanded from environment variable references such as $(VAR_NAME); see the sketch after this block
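subPath and subPathExpr are easiest to see in a small example. The sketch below is illustrative and not part of the original demos (the names subpathexpr-demo, shared-logs and /var/log/pods-demo are assumptions): each Pod mounts only its own subdirectory of a shared hostPath volume by expanding the POD_NAME environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: subpathexpr-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['/bin/sh','-c','echo hello > /logs/test.txt && sleep 3600']
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
      subPathExpr: $(POD_NAME)        # each Pod writes into its own subdirectory of the volume
  volumes:
  - name: shared-logs
    hostPath:
      path: /var/log/pods-demo        # created on the node if it does not exist
      type: DirectoryOrCreate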
Overview: the YAML below defines an init container and a main container, declares a volume at the Pod level, and mounts that volume in both containers.
- First, a volume named config-file-store is defined at the Pod level; it is backed by memory and limited to 16Mi.
- An init container mounts the config-file-store volume at /data and is responsible for downloading the configuration file into /data.
- The main container mounts the config-file-store volume read-only at /etc/envoy/, after which it can start with its normal command.
apiVersion: v1
kind: Pod
metadata:
name: volumes-emptydir-demo
namespace: default
spec:
initContainers:
- name: config-file-downloader
image: ikubernetes/admin-box
imagePullPolicy: IfNotPresent
command: ['/bin/sh','-c','wget -O /data/envoy.yaml http://ilinux.io/envoy.yaml']
volumeMounts:
- name: config-file-store
mountPath: /data
containers:
- name: envoy
image: envoyproxy/envoy-alpine:v1.13.1
command: ['/bin/sh','-c']
args: ['envoy -c /etc/envoy/envoy.yaml']
volumeMounts:
- name: config-file-store
mountPath: /etc/envoy
readOnly: true
volumes:
- name: config-file-store
emptyDir:
medium: Memory # Use "" to back it with disk instead; in either case this is only a temporary directory created on the node
sizeLimit: 16Mi
hostPath path types (the pods.spec.volumes.hostPath.type field in the YAML; a minimal example follows this list):
- File: the file must already exist at the given path;
- Directory: the directory must already exist at the given path;
- DirectoryOrCreate: if the path does not exist, it is created as an empty directory with 0755 permissions, owned by kubelet;
- FileOrCreate: if the path does not exist, it is created as an empty file with 0644 permissions, owned by kubelet;
- Socket: a UNIX socket file must already exist at the given path;
- CharDevice: a character device file must already exist at the given path;
- BlockDevice: a block device file must already exist at the given path;
- "": empty string, the default; no checks are performed before the hostPath volume is attached
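A minimal illustration of the type field (the Pod name, image and path below are assumptions, not taken from the demos in this article):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-type-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['/bin/sh','-c','sleep 3600']
    volumeMounts:
    - name: app-data
      mountPath: /app/data
  volumes:
  - name: app-data
    hostPath:
      path: /data/app             # created with 0755 on the node if missing
      type: DirectoryOrCreate     # one of the types listed above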
vim volumes-hostpath-demo.yaml
apiVersion: v1
kind: Pod
metadata:
name: volumes-hostpath-demo
namespace: dev
spec:
containers:
- name: filebeat
image: ikubernetes/filebeat:5.6.7-alpine
env:
- name: REDIS_HOST
value: redis.ilinux.io:6379
- name: LOG_LEVEL
value: info
volumeMounts:
- name: varlog
mountPath: /var/log
- name: socket
mountPath: /var/run/docker.sock
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: socket
hostPath:
path: /var/run/docker.sock
1. Check the running status
[root@master01 volumes]# kubectl get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
volumes-hostpath-demo 1/1 Running 0 30s 10.244.1.25 node01 <none> <none>
2. Exec into the container and check the mount points; the host paths are indeed mounted
[root@master01 volumes]# kubectl exec -n dev -it volumes-hostpath-demo -- /bin/sh
/ # ls -l /var/run/docker.sock
srw-rw---- 1 root 985 0 Jun 19 05:21 /var/run/docker.sock
Note: the YAML below runs the container as UID 1099. The same UID must also exist on the NFS server and have read/write permission on the exported directory; only then can user 1099 inside the container write to the mounted path.
In addition, the Kubernetes nodes must be able to run mount -t nfs (i.e. have the NFS client installed) before containers in Pods can use NFS mounts.
UID 1099 must also exist on the NFS server:
cat /etc/passwd
redis-user:x:1099:1099::/home/redis-user:/bin/bash
Permissions on the NFS shared directories:
drwxr-xr-x. 2 redis-user root  6 Sep 14 21:36 redis
drwxr-xr-x. 2 redis-user root 31 Jun 19 17:38 redis001
drwxr-xr-x. 2 redis-user root  6 Jun 19 16:51 redis002
drwxr-xr-x. 2 redis-user root  6 Jun 19 16:51 redis003
drwxr-xr-x. 2 redis-user root  6 Jun 19 16:51 redis004
drwxr-xr-x. 2 redis-user root  6 Jun 19 16:51 redis005
[root@node02 ~]# useradd -u 1099 redis-user
[root@node02 ~]# mkdir /data/redis -p
[root@node02 ~]# chown 1099 -R /data/redis/
Configure the NFS export:
[root@node02 ~]# cat /etc/exports
/data/redis 192.168.8.0/24(rw) # this is the IP range of the cluster nodes
vim volumes-nfs-demo.yaml
apiVersion: v1
kind: Pod
metadata:
name: volumes-nfs-demo
namespace: dev
labels:
app: redis
spec:
containers:
- name: redis
image: redis:alpine
ports:
- containerPort: 6379
name: redisport
securityContext:
runAsUser: 1099
volumeMounts:
- mountPath: /data
name: redisdata
volumes:
- name: redisdata
nfs:
server: 192.168.8.30
path: /data/redis
readOnly: false
[root@master01 volumes]# kubectl apply -f volumes-nfs-demo.yaml
Exec into the container; dump.rdb is already present
kubectl exec -n dev -it volumes-nfs-demo -- /bin/sh
/data $ ls -l
total 4
-rw-r--r-- 1 1099 nobody 92 Jun 19 07:51 dump.rdb
Create a file to test write access
/data $ touch a
Check whether the file appears in the directory on the NFS server
[root@node02 redis]# pwd
/data/redis
[root@node02 redis]# ls
a dump.rdb
apiVersion: v1
kind: Pod
metadata:
name: volumes-rbd-demo
namespace: dev
spec:
containers:
- name: redis
image: redis:alpine
ports:
- containerPort: 6379
name: redisport
volumeMounts:
- mountPath: /data
name: redis-rbd-vol
volumes:
- name: redis-rbd-vol
rbd:
monitors:
- '192.168.8.61:6789'
- '192.168.8.62:6789'
- '192.168.8.63:6789'
pool: kube
image: redis-img1
fsType: xfs
readOnly: false
user: kube
keyring: /etc/ceph/ceph.client.kube.keyring
#The key is stored in this file on the node, which means every node that may run the Pod must have this file
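If distributing the keyring file to every node is undesirable, the in-tree rbd plugin can also read the key from a Kubernetes Secret via secretRef. Below is a minimal sketch of the alternative volumes section; the Secret name ceph-kube-secret is an assumption and would have to be created beforehand (typically a kubernetes.io/rbd Secret containing the key):
volumes:
- name: redis-rbd-vol
  rbd:
    monitors:
    - '192.168.8.61:6789'
    - '192.168.8.62:6789'
    - '192.168.8.63:6789'
    pool: kube
    image: redis-img1
    fsType: xfs
    user: kube
    secretRef:
      name: ceph-kube-secret    # key comes from this Secret instead of a keyring file on each node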
apiVersion: v1
kind: Pod
metadata:
name: volumes-glusterfs-demo
labels:
app: redis
spec:
containers:
- name: redis
image: redis:alpine
ports:
- containerPort: 6379
name: redisport
volumeMounts:
- mountPath: /data
name: redisdata
volumes:
- name: redisdata
glusterfs:
endpoints: glusterfs-endpoints
path: kube-redis
readOnly: false
apiVersion: v1
kind: Pod
metadata:
name: volumes-cephfs-demo
spec:
containers:
- name: redis
image: redis:alpine
volumeMounts:
- mountPath: "/data"
name: redis-cephfs-vol
volumes:
- name: redis-cephfs-vol
cephfs:
monitors:
- 172.29.200.1:6789
- 172.29.200.2:6789
- 172.29.200.3:6789
path: /kube/namespaces/default/redis1
user: fsclient
secretFile: "/etc/ceph/fsclient.key"
readOnly: false
vim volumes-gitrepo-demo.yaml
apiVersion: v1
kind: Pod
metadata:
name: volumes-gitrepo-demo
spec:
containers:
- name: nginx
image: nginx:alpine
volumeMounts:
- name: html
mountPath: /usr/share/nginx/html
volumes:
- name: html
gitRepo:
repository: https://github.com/iKubernetes/Kubernetes_Advanced_Practical_2rd.git
directory: .
revision: "master"
apiVersion: v1
kind: Pod
metadata:
name: volume-cinder-demo
spec:
containers:
- image: mysql
name: mysql
args:
- "--ignore-db-dir"
- "lost+found"
env:
- name: MYSQL_ROOT_PASSWORD
value: YOUR_PASS
ports:
- containerPort: 3306
name: mysqlport
volumeMounts:
- name: mysqldata
mountPath: /var/lib/mysql
volumes:
- name: mysqldata
cinder:
volumeID: e2b8d2f7-wece-90d1-a505-4acf607a90bc
fsType: ext4
vim volumes-longhorn-demo.yaml
apiVersion: v1
kind: Pod
metadata:
name: volumes-longhorn-demo
namespace: dev
spec:
nodeName: node03
containers:
- name: redis
image: redis:alpine
imagePullPolicy: IfNotPresent
ports:
- containerPort: 6379
name: redisport
volumeMounts:
- mountPath: /data
name: redis-data-vol
volumes:
- name: redis-data-vol
persistentVolumeClaim:
claimName: pvc-dyn-longhorn-demo
Exec into the container of the deployed Pod.
Connect to Redis, set a key and save it; the volume is readable and writable.
kubectl exec -n dev -it volumes-longhorn-demo -- /bin/sh
/data # redis-cli
127.0.0.1:6379> set test 1
OK
127.0.0.1:6379> get test
"1"
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> exit
/data # ls
dump.rdb lost+found
PV and PVC bind one-to-one: a PV can be bound by only one PVC, and a PVC binds only one PV.
PVC: PersistentVolumeClaim, one of the standard Kubernetes resource types; created and used by users; namespace-scoped.
PV: PersistentVolume, which can be bound by a PVC; a PV must correspond to an actual piece of storage space (usually on a network storage service) before it can really store data. Managed by the cluster administrator; cluster-scoped.
- Cluster administrator (Admin): creates the PVs in advance;
- User: creates PVCs on demand (after a PVC is created it finds the best-matching PV and binds to it), then creates Pods whose persistentVolumeClaim volume plugin references a PVC in the same namespace.
The PV and PVC must either both belong to no StorageClass, or both belong to the same StorageClass.
A suitable PV is matched against the criteria defined in the PVC.
StorageClass (SC): a template; both PVs and PVCs may belong to a particular SC.
Effect: similar to namespaces in Kubernetes: a PVC can only look for PVs inside its own SC, and a PVC that belongs to no SC can only select among PVs that also belong to no SC.
How it works (using Ceph as the example; see section 2.3.3.1, the ceph type, in this article): when a user creates a PVC in the storage class named fast-rbd (defined later) and no suitable PV in that class matches the PVC, the class uses the configured credentials to create an image in Ceph (here an image means a disk, since it is an rbd image) and then creates a PV from it.
A StorageClass resource can act as a template for creating PVs: it associates a storage service with the SC and hands the SC that service's management interface, so the SC can perform CRUD (Create, Read, Update and Delete) operations on storage units in the backend. Therefore, when a PVC is declared against an SC and no existing PV matches it, the SC can call the management interface to create a PV that satisfies the PVC's requirements on the fly. This mechanism of supplying PVs is called Dynamic Provisioning.
Ceph rbd supports dynamic provisioning, but a kubeadm-deployed Kubernetes cluster cannot use it out of the box, because kube-controller-manager runs from an image that does not contain the Ceph client tools. The Controller Manager runs as a static Pod and contains the PV controller and the PVC controller.
Below is a Dockerfile for building a kube-controller-manager image that includes the Ceph client tools:
ARG KUBE_VERSION="v1.19.4"
FROM registry.aliyuncs.com/google_containers/kube-controller-manager:${KUBE_VERSION}
RUN apt update && apt install -y wget gnupg lsb-release
ARG CEPH_VERSION="octopus"
RUN wget -q -O - https://mirrors.aliyun.com/ceph/keys/release.asc | apt-key add - && \
echo deb https://mirrors.aliyun.com/ceph/debian-${CEPH_VERSION}/ $(lsb_release -sc) main > /etc/apt/sources.list.d/ceph.list && \
apt update && \
apt install -y ceph-common ceph-fuse
RUN rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
kubectl explain PersistentVolume.spec
Besides the volume plugin itself, the spec of a PersistentVolume mainly supports the following general fields, which define the PV's capacity, access modes, reclaim policy and other attributes.
- capacity <map[string]string>: the storage capacity of this PV (currently only the storage resource is supported);
- accessModes <[]string>: the access modes supported by this PV; storage systems generally offer some combination of ReadWriteOnce (RWO, single-node read/write), ReadOnlyMany (ROX, multi-node read-only) and ReadWriteMany (RWX, multi-node read/write), and a given storage system may support only part of these capabilities;
- persistentVolumeReclaimPolicy <string>: how the PV's space is handled once it is released; possible values are Retain (the default, keep the data), Recycle (scrub the data on the PV; now deprecated) and Delete (delete the PV). Currently only nfs and hostPath support Recycle, and only some storage systems support Delete;
- volumeMode <string>: the volume mode of this PV, i.e. whether the volume is formatted and used as a filesystem or consumed as a raw block device; defaults to Filesystem; only storage systems with a block-device interface support the Block mode;
- storageClassName <string>: the name of the StorageClass this PV belongs to; the class must already exist; defaults to the empty string, i.e. the PV belongs to no StorageClass;
- mountOptions <[]string>: a list of mount options, e.g. ro, soft and hard;
- nodeAffinity <Object>: node affinity constraints that limit which nodes can access this volume, used mainly with local volumes (a sketch follows this list).
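nodeAffinity is mostly used with local volumes; here is a minimal sketch (the PV name, path and the hostname node01 are assumptions for illustration):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
  local:
    path: /data/local-vol           # must already exist on the node
  nodeAffinity:                     # the PV is usable only from node01
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01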
When defining a PVC, users can filter the cluster's PV resources by access modes (accessModes), data source (dataSource), storage resource requests and limits (resources), storage class, label selector, volume mode and volume name; among these, resources and accessModes are the most important criteria. The nestable fields under a PVC's spec are as follows.
- accessModes <[]string>: the PVC's access modes; the same RWO, RWX and ROX modes are supported;
- dataSource <Object>: an existing object (such as another PVC or a volume snapshot) used to pre-populate the new volume;
- resources <Object>: the storage resource requests and limits of the PVC (requests and limits; currently only storage);
- selector <Object>: a label selector used to further filter candidate PVs (see the sketch after this list);
- storageClassName <string>: the name of the StorageClass this PVC belongs to; a PVC that specifies a storage class can only select PVs within that same class, otherwise it can only choose among PVs that belong to no class;
- volumeMode <string>: the volume mode, i.e. whether the volume is used as a filesystem or as a raw block device; defaults to Filesystem;
- volumeName <string>: directly specify the name of the PV to bind.
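A minimal sketch of a PVC that narrows the candidate PVs with a label selector (the label usage=redis is an assumption; the target PVs would need to carry it):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-selector-demo
  namespace: dev
spec:
  accessModes: ["ReadWriteMany"]
  volumeMode: Filesystem
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      usage: redis              # only PVs labeled usage=redis are considered
  # volumeName: pv-nfs-demo    # alternatively, pin one specific PV by name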
The desired state of a StorageClass resource is defined at the same level as apiVersion, kind and metadata rather than nested under a spec field. The supported fields include the following (see the sketch after this list):
- allowVolumeExpansion <boolean>: whether volumes created from this class may later be expanded;
- allowedTopologies <[]Object>: the node topologies in which volumes may be dynamically provisioned; only used when volume scheduling is enabled; each volume plugin has its own supported topology specification, and an empty topology selector means no topology restriction;
- provisioner <string>: required; specifies the provisioner (storage supplier); the storage class relies on this value to determine which storage plugin to use to reach the target storage system. Kubernetes ships with many built-in provisioners whose names are prefixed with kubernetes.io/, e.g. kubernetes.io/glusterfs;
- parameters <map[string]string>: parameters passed to the provisioner; the valid keys depend on the provisioner;
- reclaimPolicy <string>: the default reclaim policy of PVs dynamically created by this storage class, either Delete (the default) or Retain; statically created PVs keep whatever policy is defined in their own spec;
- volumeBindingMode <string>: how provisioning and binding are performed for PVCs; defaults to VolumeBindingImmediate; this field takes effect only when volume scheduling is enabled;
- mountOptions <[]string>: the default mount options of PVs dynamically created by this class.
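A minimal sketch showing allowVolumeExpansion and a non-default volumeBindingMode; it assumes the Longhorn installation from section 1.3.9 (whose CSI provisioner registers as driver.longhorn.io), and the class name and parameters are assumptions:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-wffc
provisioner: driver.longhorn.io           # CSI provisioner registered by Longhorn
parameters:
  numberOfReplicas: "2"
reclaimPolicy: Retain                     # keep dynamically created PVs after the PVC is deleted
allowVolumeExpansion: true                # PVCs of this class may later be resized
volumeBindingMode: WaitForFirstConsumer   # provision and bind only when a Pod actually uses the PVC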
Create the directories
mkdir /data/redis00{1,2,3,4,5}
Configure the NFS exports file
[root@node02 data]# cat /etc/exports
/data/redis 192.168.8.0/24(rw)
/data/redis001 192.168.8.0/24(rw)
/data/redis002 192.168.8.0/24(rw)
/data/redis003 192.168.8.0/24(rw)
/data/redis004 192.168.8.0/24(rw)
/data/redis005 192.168.8.0/24(rw)
PVs are cluster-scoped, so no namespace needs to be specified.
Create several PVs here for the PVC tests that follow.
vim pv-nfs-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-demo
spec:
#Set the size of the PV
capacity:
storage: 5Gi
#Set the volume mode of the PV
volumeMode: Filesystem
#Set the access modes of the PV
accessModes:
- ReadWriteMany
#Specify the reclaim policy of the PV
persistentVolumeReclaimPolicy: Retain
#List of mount options
mountOptions:
- hard
- nfsvers=4.1
#NFS backend configuration
nfs:
path: "/data/redis001"
server: 192.168.8.30
vim pv-nfs-002.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-002
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: "/data/redis002"
server: 192.168.8.30
vim pv-nfs-003.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-nfs-003
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: "/data/redis003"
server: 192.168.8.30
[root@master01 volumes]# kubectl apply -f pv-nfs-demo.yaml
persistentvolume/pv-nfs-demo created
Now the PVs are visible:
[root@master01 volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-002 10Gi RWX Retain Available 2m58s
pv-nfs-003 1Gi RWO Retain Available 2m55s
pv-nfs-demo 5Gi RWX Retain Available 8m5s
See section 2.3.1.1 (pv-nfs) of this article for how these PVs were created.
[root@master01 volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-002 10Gi RWX Retain Available 2m58s
pv-nfs-003 1Gi RWO Retain Available 2m55s
pv-nfs-demo 5Gi RWX Retain Available 8m5s
vim pvc-demo-001.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-demo-001
namespace: dev
spec:
accessModes: ["ReadWriteMany"]
volumeMode: Filesystem
resources:
requests:
storage: 3Gi
limits:
storage: 10Gi
[root@master01 volumes]# kubectl apply -f pvc-demo-001.yaml
persistentvolumeclaim/pvc-demo-001 created
You can see that, based on the criteria we specified, the PVC selected and bound the PV pv-nfs-demo:
[root@master01 volumes]# kubectl get pvc -n dev
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-demo-001 Bound pv-nfs-demo 5Gi RWX 8s
Create the YAML file
vim volumes-pvc-demo.yaml
apiVersion: v1
kind: Pod
metadata:
name: volumes-pvc-demo
namespace: dev
spec:
containers:
- name: redis
image: redis:alpine
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1099
ports:
- containerPort: 6379
name: redisport
volumeMounts:
- mountPath: /data
name: redis-rbd-vol
volumes:
- name: redis-rbd-vol
persistentVolumeClaim:
claimName: pvc-demo-001
[root@master01 volumes]# kubectl apply -f volumes-pvc-demo.yaml
pod/volumes-pvc-demo created
You can see the claim is in use:
[root@master01 volumes]# kubectl describe pods volumes-pvc-demo -n dev
Name: volumes-pvc-demo
Namespace: dev
Priority: 0
Node: node02/192.168.8.30
Start Time: Sun, 19 Jun 2022 17:27:03 +0800
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.2.35
IPs:
IP: 10.244.2.35
Containers:
redis:
Container ID: docker://1bd53901b0f2d2023f9f3305837e59f0d0ceaf14584ef0bd45c2a3a27b2971f0
Image: redis:alpine
Image ID: docker-pullable://redis@sha256:4bed291aa5efb9f0d77b76ff7d4ab71eee410962965d052552db1fb80576431d
Port: 6379/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 19 Jun 2022 17:27:04 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/data from redis-rbd-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vhpbs (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
redis-rbd-vol:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-demo-001
ReadOnly: false
default-token-vhpbs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vhpbs
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 59s default-scheduler Successfully assigned dev/volumes-pvc-demo to node02
Normal Pulled 60s kubelet Container image "redis:alpine" already present on machine
Normal Created 60s kubelet Created container redis
Normal Started 59s kubelet Started container redis
[root@master01 volumes]# kubectl exec -n dev -it volumes-pvc-demo -- /bin/sh
/data $ ls
dump.rdb
Installation guide: https://blog.csdn.net/martinlinux/article/details/125364199
vim pvc-dyn-longhorn-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dyn-longhorn-demo
namespace: dev
spec:
accessModes: ["ReadWriteOnce"]
volumeMode: Filesystem
resources:
requests:
storage: 3Gi
limits:
storage: 10Gi
storageClassName: longhorn
First confirm that no existing PV matches it:
[root@master01 volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-002 10Gi RWX Retain Available 6h39m
pv-nfs-003 1Gi RWO Retain Available 6h39m
Apply it:
[root@master01 volumes]# kubectl apply -f pvc-dyn-longhorn-demo.yaml
persistentvolumeclaim/pvc-dyn-longhorn-demo created
A PV has now been created automatically:
[root@master01 volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs-002 10Gi RWX Retain Available 6h40m
pv-nfs-003 1Gi RWO Retain Available 6h40m
pvc-7aff1b0a-eadf-4e96-81db-ad831568007d 3Gi RWO Retain Bound dev/pvc-dyn-longhorn-demo longhorn 17s
And the PVC shows as bound:
[root@master01 volumes]# kubectl get pvc -n dev
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-dyn-longhorn-demo Bound pvc-7aff1b0a-eadf-4e96-81db-ad831568007d 3Gi RWO longhorn 78s
See section 1.3.9 of this article (the Longhorn CSI volume type).
Overview: when a user creates a PVC in the storage class named fast-rbd defined below and no suitable PV in that class matches the PVC, the class uses the configured credentials to create an image in Ceph (here an image means a disk, since it is an rbd image) and then creates a PV from it. A PVC that consumes this class is sketched after the manifest.
vim storageclass-rbd-demo.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-rbd
provisioner: kubernetes.io/rbd
parameters:
monitors: ceph01.ilinux.io:6789,ceph02.ilinux.io:6789,ceph03.ilinux.io:6789
adminId: admin
adminSecretName: ceph-admin-secret
adminSecretNamespace: kube-system
pool: kube
userId: kube
userSecretName: ceph-kube-secret
userSecretNamespace: kube-system
fsType: ext4
imageFormat: "2"
imageFeatures: "layering"
reclaimPolicy: Retain
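To trigger the dynamic provisioning described above, a PVC only needs to name the class in storageClassName; a minimal sketch (the PVC name and size are assumptions):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dyn-rbd-demo
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
  storageClassName: fast-rbd      # ask the fast-rbd class to provision a matching PV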
The volume plugin here is called a provisioner. For NFS we use nfs-client, an external provisioner that automatically creates PVs on an already-configured NFS server. provisioner: specifies the type of volume plugin, which may be a built-in plugin (such as kubernetes.io/aws-ebs) or an external plugin (such as ceph.com/cephfs provided by external-storage).
mkdir /opt/k8s
chmod 777 /opt/k8s # Without 777, only the directory's owner and group can write to it; other users cannot, so writes from inside containers through the mount would also fail
cat << EOF >> /etc/exports
/opt/k8s 192.168.8.0/24(rw,no_root_squash,sync)
EOF
systemctl restart nfs
cat << EOF > nfs-client-rbac.yaml
#Create the ServiceAccount used to grant the NFS provisioner the permissions it needs in the cluster
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
---
#Create the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nfs-client-provisioner-clusterrole
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
#Bind the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nfs-client-provisioner-clusterrolebinding
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-clusterrole
apiGroup: rbac.authorization.k8s.io
EOF
Because Kubernetes 1.20 removes the selfLink field by default (the RemoveSelfLink feature gate is enabled), dynamic PV provisioning through this nfs provisioner fails on k8s 1.20+. The workaround is to modify the apiserver configuration:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
containers:
- command:
- kube-apiserver
- --feature-gates=RemoveSelfLink=false # add this line
- --advertise-address=192.168.80.20
......
#No kubectl apply is needed: the kubelet watches /etc/kubernetes/manifests and restarts the static kube-apiserver Pod automatically after the file is saved
#The provisioner Deployment can be created in any namespace, as long as it is the same namespace as the ServiceAccount referenced by the ClusterRoleBinding in the RBAC manifest (default here)
cat << EOF > nfs-client-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: nfs-client-provisioner
spec:
replicas: 1
selector:
matchLabels:
app: nfs-client-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner # use the ServiceAccount created above
containers:
- name: nfs-client-provisioner
image: quay.io/external_storage/nfs-client-provisioner:latest
imagePullPolicy: IfNotPresent
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-storage # provisioner name; must match the provisioner field of the StorageClass
- name: NFS_SERVER
value: 192.168.8.103 # NFS server to bind to
- name: NFS_PATH
value: /opt/k8s # exported directory on the NFS server
volumes: # declare the NFS volume
- name: nfs-client-root
nfs:
server: 192.168.8.103
path: /opt/k8s
EOF
cat << EOF > nfs-client-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-client-storageclass
provisioner: nfs-storage # must match the PROVISIONER_NAME environment variable in the provisioner Deployment
parameters:
archiveOnDelete: "false" #false表示在删除PVC时不会对数据进行存档,即删除数据
EOF
cat << EOF > test-pvc-pod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test-nfs-pvc
namespace: pub-prod
spec:
accessModes:
- ReadWriteOnce
storageClassName: nfs-client-storageclass # reference the StorageClass
resources:
requests:
storage: 2Gi
EOF
#Create the Pod
cat << EOF > test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: task-pv-pod
namespace: pub-prod
spec:
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: test-nfs-pvc
EOF
kubectl apply -f nfs-client-rbac.yaml
kubectl apply -f nfs-client-storageclass.yaml
kubectl apply -f nfs-client-provisioner.yaml
kubectl apply -f test-pvc-pod.yaml
kubectl apply -f test-pod.yaml
#Check the PVC and the PV
[root@master01 nfs]# kubectl get pvc -n pub-prod
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-nfs-pvc Bound pvc-ba2af4c3-dcde-49ec-be00-a8d0c92e6245 2Gi RWO nfs-client-storageclass 42m
[root@master01 nfs]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-ba2af4c3-dcde-49ec-be00-a8d0c92e6245 2Gi RWO Delete Bound pub-prod/test-nfs-pvc nfs-client-storageclass 42m
#Check whether a shared subdirectory was created automatically
[root@node02 k8s]# pwd
/opt/k8s
[root@node02 k8s]# ls
pub-prod-test-nfs-pvc-pvc-ba2af4c3-dcde-49ec-be00-a8d0c92e6245
#Create a file and test whether it can be accessed from the Pod
[root@node02 k8s]# cd pub-prod-test-nfs-pvc-pvc-ba2af4c3-dcde-49ec-be00-a8d0c92e6245/
[root@node02 pub-prod-test-nfs-pvc-pvc-ba2af4c3-dcde-49ec-be00-a8d0c92e6245]# cat index.html
this is nfs-storageclass web!!!
[root@node02 k8s]# curl 10.244.2.3
this is nfs-storageclass web!!!