A PersistentVolume (PV) is a piece of networked storage in the cluster that has been provisioned by an administrator. Like a node, a PV is a resource in the cluster. It is also a volume plugin, like Volumes, but its lifecycle is independent of any individual Pod that uses it. The PV API object captures the implementation details of NFS, iSCSI, or other cloud storage systems.
A PersistentVolumeClaim (PVC) is a user's request for storage. It is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. A Pod can request specific amounts of resources (CPU and memory); a PVC can request a specific size and access modes (for example, it can be mounted read-write once or read-only many times).
There are two ways PVs are provisioned: statically and dynamically.
Static: a cluster administrator creates a number of PVs that carry the details of the real storage available to cluster users. They exist in the Kubernetes API and are ready for consumption.
Dynamic: when none of the administrator's static PVs match a user's PVC, the cluster may try to provision a volume specifically for that PVC. This provisioning is based on StorageClasses.
The binding between a PVC and a PV is a one-to-one mapping. If no matching PV is found, the PVC remains unbound indefinitely.
Using
A Pod uses a PVC just like a volume. The cluster inspects the claim, finds the bound PV, and mounts that volume for the Pod. For volumes that support multiple access modes, the user specifies the desired mode. Once a user has a claim and that claim is bound, the bound PV belongs to the user for as long as they need it. Users schedule Pods and access their PVs by including a persistentVolumeClaim entry in the Pod's volumes block.
Releasing
When a user is done with a volume, they can delete the PVC object through the API. Once the PVC is deleted, the corresponding PV is considered "released", but it is not yet available for another claim. The previous claim's data is still present on the volume and must be handled according to the reclaim policy.
Reclaiming
A PV's reclaim policy tells the cluster what to do with the volume after it has been released. Currently, volumes can be Retained, Recycled, or Deleted. Retain allows the resource to be reclaimed manually. For volumes that support it, Delete removes both the PV object from Kubernetes and the associated external storage (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume). Dynamically provisioned volumes inherit the reclaim policy of their StorageClass, which defaults to Delete.
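As a minimal sketch (assuming a PV named pv1 already exists, as in the example below), the reclaim policy of an existing PV can be switched to Retain in place with kubectl patch:
kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'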
Creating an NFS PV
vim pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  nfs:
    path: /nfs
    server: 192.168.43.250
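The manifest above assumes that the NFS server at 192.168.43.250 already exports /nfs and that every node has the NFS client tools installed. A rough sketch of the server-side preparation (package, unit name, and export options are only an example and may differ on your distribution):
yum install -y nfs-utils
mkdir -p /nfs
echo '/nfs *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server    # unit name may differ by distribution
exportfs -rv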
kubectl create -f pv1.yaml
kubectl get pv
Comparing with the PV we just created, kubectl get pv shows several columns, such as NAME and STATUS. A few of them are worth understanding:
Access modes (ACCESS MODES)
ReadWriteOnce – the volume can be mounted read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted read-write by many nodes
Reclaim policy (RECLAIM POLICY) – Retain, Recycle, or Delete, as described above
Status (STATUS) – Available (not yet bound to a claim), Bound (bound to a claim), Released (the claim was deleted but the resource is not yet reclaimed), or Failed (automatic reclamation failed)
NFS persistent storage example:
vim pv1.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs
    server: 192.168.43.250
vim pv2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfs
    server: 192.168.43.250
vim pv3.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: nfs
  nfs:
    path: /nfs
    server: 192.168.43.250
kubectl create -f pv1.yaml
kubectl create -f pv2.yaml
kubectl create -f pv3.yaml
vim pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
kubectl create -f pvc1.yaml
kubectl get pvc
kubectl get pv
vim pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
kubectl delete -f pvc1.yaml    # remove the previous pvc1 before re-creating it with the new request
kubectl create -f pvc1.yaml
kubectl get pvc
kubectl get pv
vim pvc1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
kubectl delete -f pvc1.yaml    # remove the previous pvc1 before re-creating it with the new access mode
kubectl create -f pvc1.yaml
kubectl get pvc
kubectl get pv
Note: a small conclusion can be drawn from the examples above: for a PVC to bind to a PV, the cluster must contain a PV that satisfies all of the PVC's requirements at the same time; otherwise no binding happens and the PVC stays in the Pending state.
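When a claim stays Pending, the events printed by kubectl describe usually show which requirement (size, access mode, or storageClassName) could not be satisfied, for example:
kubectl describe pvc pvc1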
kubectl get pvc
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: reg.westos.org:5000/nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: storage1
  volumes:
  - name: storage1
    persistentVolumeClaim:
      claimName: pvc1
kubectl create -f pod.yaml
kubectl get pod
kubectl describe pod my-pod
kubectl exec my-pod -it -- bash
kubectl get pod -o wide
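As a quick functional check (not part of the original steps; the page content and pod IP below are placeholders), one could write a test page into the mounted NFS directory from inside the container and fetch it via the pod IP shown by kubectl get pod -o wide:
kubectl exec my-pod -- sh -c 'echo hello-from-pv1 > /usr/share/nginx/html/index.html'
curl <pod-ip>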
kubectl delete -f pod.yaml
vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: reg.westos.org:5000/nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: storage1
  volumes:
  - name: storage1
    persistentVolumeClaim:
      claimName: pvc1
kubectl create -f pod.yaml
kubectl get pod
kubectl get pod -o wide
A StorageClass provides a way to describe a "class" of storage. Different classes may map to different quality-of-service levels, backup policies, or other policies.
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when the StorageClass needs to dynamically provision a PersistentVolume.
StorageClass attributes
Provisioner: determines which volume plugin is used to provision PVs; this field is required. Either an internal or an external provisioner can be specified. The code for external provisioners lives in kubernetes-incubator/external-storage, which includes NFS, Ceph, and others.
Reclaim Policy: the reclaim policy applied to the PersistentVolumes created by this class, specified with the reclaimPolicy field; it can be Delete or Retain, and defaults to Delete when not specified.
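Putting these two fields together, a minimal StorageClass sketch might look like the following (the class name here is only illustrative; the provisioner value reuses the one deployed later in this section):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs          # hypothetical name
provisioner: westos.org/nfs  # which provisioner creates the PVs
reclaimPolicy: Retain        # Delete is the default when omitted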
The NFS client provisioner is an automatic provisioner that uses NFS as its backing storage and automatically creates PVs for claims. It does not provide NFS storage itself; an existing NFS server must already be available.
PVs are backed by directories named ${namespace}-${pvcName}-${pvName} on the NFS server.
When a PV is reclaimed, the directory is renamed to archived-${namespace}-${pvcName}-${pvName} on the NFS server.
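For illustration only (the UID is made up): a claim named test-claim in the default namespace, bound to a dynamically created PV named pvc-9f8e..., would appear on the NFS server as the directory default-test-claim-pvc-9f8e..., and as archived-default-test-claim-pvc-9f8e... after the claim is deleted with archiving enabled.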
Example of dynamically provisioning PVs with NFS:
vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: reg.westos.org:5000/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: westos.org/nfs
            - name: NFS_SERVER
              value: 192.168.43.250
            - name: NFS_PATH
              value: /nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.43.250
            path: /nfs
vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: westos.org/nfs
parameters:
  archiveOnDelete: "false"
kubectl apply -f .
kubectl get pod
kubectl get all
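Before creating any claims, it is worth confirming that the StorageClass was registered (sc is the built-in short name for storageclasses):
kubectl get storageclass
kubectl describe storageclass managed-nfs-storage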
Step 2: create a PVC, then check the automatically generated PV and the directory created on the NFS server.
vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
kubectl apply -f pvc.yaml
kubectl get pvc
kubectl get pv
On the NFS server:
cd /nfs
ls    # the generated directory, i.e. the external storage space the provisioner allocated on NFS
Step 3: delete the PVC created above; the corresponding PV is then deleted automatically, and the backing storage on the NFS server is removed as well.
kubectl delete -f pvc.yaml
kubectl get pvc
kubectl get pv
Note: because class.yaml sets archiveOnDelete to "false", the data previously stored on the external NFS server is not kept when the PVC is deleted, so the backing directory is removed automatically as well. The parameter in class.yaml can also be changed so that, when the PVC is deleted, the data is still preserved (archived) on the external NFS server.
vim class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: westos.org/nfs
parameters:
  archiveOnDelete: "true"    # set to "true" so the data is archived on the external storage after the claim is deleted
kubectl apply -f class.yaml
kubectl apply -f pvc.yaml
kubectl get pv
kubectl get pvc
On the NFS server:
cd /nfs
ls
kubectl delete -f pvc.yaml
kubectl get pv
kubectl get pvc
On the NFS server:
cd /nfs
ls
vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: reg.westos.org:5000/nginx
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/usr/share/nginx/html"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl get pvc
kubectl get pv
kubectl get pod
kubectl describe pod test-pod
vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
#  annotations:
#    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
kubectl apply -f pvc.yaml
kubectl get pvc
kubectl get pv
Solution:
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'    # add an annotation to the class, marking the current StorageClass as the default
kubectl get sc
kubectl get pvc
kubectl get pv
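If the default marker ever needs to be removed again, the same annotation can be patched back to false; this simply mirrors the command above:
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'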
vim service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
kubectl apply -f service.yaml
kubectl get svc
kubectl describe service nginx
vim deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
        ports:
        - containerPort: 80
          name: web
kubectl apply -f deployment.yaml
kubectl get pod -o wide
kubectl describe service nginx
The StatefulSet controller is designed specifically for stateful applications; it starts and stops Pods in a defined order.
kubectl delete -f deployment.yaml
kubectl apply -f deployment.yaml
kubectl get pod
A StatefulSet abstracts application state into two kinds: topology state and storage state.
A StatefulSet numbers all of its Pods; the naming rule is $(statefulset name)-$(ordinal), starting from 0.
When a Pod is deleted and rebuilt, the rebuilt Pod's network identity does not change: the topology state of the Pods is pinned down as "name + ordinal", and every Pod gets a fixed and unique access point, namely the DNS record corresponding to that Pod.
cat deployment.yaml
kubectl apply -f deployment.yaml
kubectl get pod
kubectl get pod -o wide
kubectl run -it test --image=reg.westos.org:5000/busyboxplus
nslookup web-0.nginx.default.svc.cluster.local
nslookup web-1.nginx.default.svc.cluster.local
vim deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: reg.westos.org:5000/nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      storageClassName: managed-nfs-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
kubectl get pvc
kubectl get pv
kubectl get sc
kubectl apply -f deployment.yaml
kubectl get pod
kubectl get pvc
kubectl get pv
A StatefulSet also allocates and creates a PVC with the same ordinal for each Pod. Kubernetes can then bind that PVC to a matching PV through the PersistentVolume mechanism, guaranteeing that every Pod owns an independent volume.
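With the volumeClaimTemplates above, the claims are expected to be named <template name>-<pod name>, i.e. www-web-0 and www-web-1 here. A small sketch of verifying that each replica keeps its own data (the page contents are placeholders, and curl is assumed to be available in the busyboxplus test pod used earlier):
kubectl exec web-0 -- sh -c 'echo web-0 > /usr/share/nginx/html/index.html'
kubectl exec web-1 -- sh -c 'echo web-1 > /usr/share/nginx/html/index.html'
curl web-0.nginx    # run from a pod inside the cluster
curl web-1.nginx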
First, for the StatefulSet you want to scale, make sure the application can actually be scaled.
$ kubectl get statefulsets
Method 1: change the number of replicas of the StatefulSet
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
Method 2: if the StatefulSet was originally created with kubectl apply or kubectl create --save-config, update .spec.replicas in the StatefulSet manifest and then run kubectl apply
$ kubectl apply -f <stateful-set-file-updated>
Method 3: edit the field directly with kubectl edit
$ kubectl edit statefulsets <stateful-set-name>
Method 4: use kubectl patch
$ kubectl patch statefulsets <stateful-set-name> -p '{"spec":{"replicas":<new-replicas>}}'
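Taking the web StatefulSet from this example, scaling up to 3 replicas with method one would look like this sketch:
$ kubectl scale statefulsets web --replicas=3
$ kubectl get pod    # web-2 is created only after web-0 and web-1 are Running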
cat deployment.yaml
kubectl apply -f deployment.yaml
kubectl get pod
kubectl describe service nginx
kubectl get pvc
kubectl get pv