Cluster installed with kubeadm, version: v1.21.0
IP Address | Purpose |
---|---|
192.168.2.200 | Master |
192.168.2.203 | NFS server |
192.168.2.241 | Node01 |
192.168.2.244 | Node02 |
Operating system: CentOS Linux release 7.9.2009
Kernel version: 4.18.9-1.el7.elrepo.x86_64
# Stop and disable the firewall
$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld
# Turn SELinux off now and disable it permanently
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
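To confirm the change took effect, getenforce should now report Permissive (and Disabled after the next reboot):

# Check the current SELinux mode
$ getenforce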
# Install the NFS server packages (on the NFS server, 192.168.2.203)
$ sudo yum install -y nfs-utils rpcbind
# Create the shared directory
$ sudo mkdir -p /data/nfs_storage
# Change its owner and group to the NFS anonymous user
$ sudo chown -R nfsnobody:nfsnobody /data/nfs_storage
# Edit the exports file
$ sudo vi /etc/exports
# Add the following line (format: shared_directory client1(option1,option2,...) client2(option1,option2,...))
/data/nfs_storage 192.168.2.0/24(rw,no_root_squash,no_all_squash,sync)
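If the NFS service is already running when /etc/exports changes, the new export can be reloaded without a restart (standard exportfs usage, not from the original steps):

# Re-export everything in /etc/exports, then list the active exports
$ sudo exportfs -ra
$ sudo exportfs -v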
Common options:
- rw: read-write access; ro: read-only access
- sync: commit writes to disk before replying; async: reply before writes reach disk

User mapping:
- root_squash: map requests from the client's root to the anonymous user; no_root_squash: let client root act as root (used above)
- all_squash: map all client users to the anonymous user; no_all_squash: keep the client's own UIDs/GIDs (used above)
# Enable both services at boot, then start rpcbind before the NFS server
$ sudo systemctl enable rpcbind.service
$ sudo systemctl enable nfs-server.service
$ sudo systemctl start rpcbind
$ sudo systemctl start nfs-server
# Verify the export is visible
$ sudo showmount -e 192.168.2.203
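As an optional sanity check (assumed to run from one of the nodes, which also needs nfs-utils installed), the export can be mounted by hand:

# Mount the share temporarily, write a test file, then clean up
$ sudo mount -t nfs 192.168.2.203:/data/nfs_storage /mnt
$ sudo touch /mnt/test.txt && ls /mnt
$ sudo umount /mnt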
There are two ways to provision PVs: static and dynamic.
Static provisioning: the administrator creates PVs in advance (see the sketch below).
Dynamic provisioning: the NFS provisioner is an automated volume provisioner that uses an existing, already-configured NFS server to dynamically provision Kubernetes PersistentVolumes through PersistentVolumeClaims.
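For contrast with the dynamic flow that follows, here is a minimal sketch of a statically provisioned NFS PV (the name and capacity are illustrative, not part of the original setup):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-demo            # hypothetical name, for illustration only
spec:
  capacity:
    storage: 5Gi               # assumed size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.2.203
    path: /data/nfs_storage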
The provisioner needs RBAC permissions to watch and manage PVs, PVCs, StorageClasses, and events:
$ cat nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-system   # must live in the same namespace as the Role it binds
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
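As a quick optional check that the bindings work, kubectl can impersonate the service account:

# Should print "yes" once the RBAC objects have been applied
$ kubectl auth can-i create persistentvolumes --as=system:serviceaccount:kube-system:nfs-client-provisioner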
nfs-client-provisioner is a simple external NFS provisioner for Kubernetes; it does not provide NFS itself and relies on an existing NFS server for the actual storage.

$ cat nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: kube-system   # must match the namespace used in the RBAC file
  labels:
    k8s-app: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        k8s-app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: gxf-nfs-storage   # provisioner name; must match the provisioner field in nfs-StorageClass.yaml
            - name: NFS_SERVER
              value: 192.168.2.203     # IP address of the NFS server
            - name: NFS_PATH
              value: /data/nfs_storage # exported directory on the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.203
            path: /data/nfs_storage    # exported directory on the NFS server
Deploy with the following commands:
$ kubectl apply -f nfs-rbac.yaml
$ kubectl apply -f nfs-provisioner.yaml
# The provisioner pod should reach Running
$ kubectl get pods -l k8s-app=nfs-client-provisioner -n kube-system -o wide
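If the pod does not reach Running, its logs usually show NFS mount or permission errors first:

$ kubectl -n kube-system logs deploy/nfs-client-provisioner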
A StorageClass is an abstract definition of storage resources: it hides backend storage details from the PVCs that users submit and frees administrators from manually managing PVs, since the system creates and binds PVs automatically. This is what enables dynamic provisioning.
A StorageClass definition consists mainly of a name, the backend storage provisioner, and the parameters for that backend. For an NFS backend the configuration is simple: only the provisioner has to be specified.
$ cat nfs-StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: gxf-nfs-storage   # must match the PROVISIONER_NAME environment variable in the provisioner deployment
reclaimPolicy: Retain          # defaults to Delete
parameters:
  archiveOnDelete: "true"      # "false": deleting the PV also deletes its directory on the NFS server; "true": the directory is kept
If nfs-StorageClass.yaml omits reclaimPolicy (or sets it to Delete, the default) and uses archiveOnDelete: "false", deleting a PVC automatically deletes the corresponding PV together with the files in the NFS directory. With reclaimPolicy: Retain and archiveOnDelete: "true", the PV has to be deleted manually after the PVC is removed, and the files on the NFS server are kept.

$ kubectl apply -f nfs-StorageClass.yaml
$ kubectl get sc
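Before the monitoring-based verification below, the StorageClass can also be exercised directly with a throwaway PVC (test-claim is an illustrative name, not from the original); the provisioner should bind it automatically:

$ cat test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim             # hypothetical name, for illustration only
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi             # assumed size; any small value works
$ kubectl apply -f test-claim.yaml
$ kubectl get pvc test-claim   # STATUS should become Bound
$ kubectl delete -f test-claim.yaml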
Verification is done through the Prometheus + Grafana monitoring platform deployed later.
If anything goes wrong, use kubectl describe pod <name> to inspect the details and troubleshoot.
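For example, applied to the provisioner deployed above:

$ kubectl describe pod -n kube-system -l k8s-app=nfs-client-provisioner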