Docker Storage ---- Kubernetes Storage
Docker's container layer itself can provide storage: data is written to the writable layer (copy-on-write).
Docker's data persistence solutions:
data volume ----> 1. bind mount  2. docker managed volume
The two are not fundamentally different; the distinction is whether the relevant file or directory on the Docker host must be prepared in advance.
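A quick sketch of the two forms (the paths here are only illustrative):
# bind mount: you pick the host path, and it must exist before the container starts
docker run -v /host/dir:/container/dir busybox
# docker managed volume: Docker creates and manages the host-side directory itself
docker run -v /container/dir busybox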
Volume
Similar to Docker: mount a file or directory on the host into the Pod.
emptyDir
An example:
[root@master ~]# vim emptyDir.yaml
kind: Pod
apiVersion: v1
metadata:
  name: producer-consumer
spec:
  containers:
  - name: producer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /producer_dir    # path inside the container
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /producer_dir/hello.txt; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /consumer_dir
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt; sleep 30000
  volumes:
  - name: shared-volume
    emptyDir: {}
Summary: In the YAML above, volumes defines the Kubernetes storage, and the volumeMounts inside each container reference the storage defined under volumes. You can therefore read it this way: the directory that volumes defines on the Docker host is mounted into the producer container at /producer_dir and into the consumer container at /consumer_dir. It follows that the consumer container's /consumer_dir directory should also contain a hello.txt file.
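// Apply the manifest first so the Pod exists (this step is implied before the log check below; the working directory may differ):
[root@master yaml]# kubectl apply -f emptyDir.yaml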
// Verify by checking the consumer container's logs
[root@master yaml]# kubectl logs producer-consumer consumer
hello world
// Check which node the Pod is running on
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
producer-consumer 2/2 Running 0 102s 10.244.1.2 node01 <none> <none>
// On the corresponding node, inspect the container's details (Mounts)
Run docker ps on that node to find the container name, then:
[root@node01 ~]# docker inspect k8s_consumer_producer-consumer_default_a7c (container name)
"Mounts": [
{
"Type": "bind",
"Source": "/var/lib/kubelet/pods/a7c2c37b-f1cf-4777-a37b-d3
"Destination": "/consumer_dir",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
PS: The container's Mounts field shows that this is equivalent to running the Docker container with a command like:
docker run -v /producer_dir busybox
Use cases for emptyDir: if the Pod is deleted, the data is deleted with it, so it provides no persistence. It is a temporary volume used when containers inside one Pod need to share data.
HostPath
Compared with emptyDir, hostPath is equivalent to running the container with:
docker run -v /host/path:/container/path
// No new YAML file is created here; instead, change the volumes field of emptyDir.yaml to hostPath.
[root@master yaml]# mkdir /data/hostPath -p
kind: Pod
apiVersion: v1
metadata:
  name: producer-consumer
spec:
  containers:
  - name: producer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /producer_dir    # path inside the container
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /producer_dir/hello.txt; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - name: shared-volume
      mountPath: /consumer_dir
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt; sleep 30000
  volumes:
  - name: shared-volume
    hostPath:
      path: "/data/hostPath"
Summary: Compared with emptyDir, hostPath provides stronger persistence, but it couples the Pod to the host. Most workloads therefore avoid this approach; it is usually used only by components related to Docker or the Kubernetes cluster itself.
PV and PVC
PV: PersistentVolume (persistent, stable)
External storage provided to the Kubernetes cluster, usually pre-provisioned storage space (a directory in some file system).
PVC: PersistentVolumeClaim (a claim, or request)
When an application needs persistent storage, it can request space from a PV through a PVC.
Creating a PV backed by NFS
Hostname | IP address
---------|-------------
master   | 192.168.1.20
node01   | 192.168.1.21
node02   | 192.168.1.22
// Install the nfs-utils package and the rpcbind service on all three nodes.
[root@master ~]# yum -y install nfs-utils rpcbind
[root@node01 ~]# yum -y install nfs-utils rpcbind
[root@node02 ~]# yum -y install nfs-utils rpcbind
// The NFS service will run on the master node, so plan the shared directory on the master in advance.
[root@master ~]# mkdir /nfsdata
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
[root@node01 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
[root@node02 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
// Create the pv.yaml file
[root@master ~]# mkdir yaml
[root@master ~]# mkdir -p /nfsdata/pv1
[root@master ~]# cd yaml
[root@master yaml]# vim pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv1
spec:
  capacity:               # PV capacity
    storage: 1Gi
  accessModes:            # access modes
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.20
[root@master yaml]# kubectl apply -f pv.yaml
[root@master yaml]# kubectl get pv
PS: Access modes supported by a PV:
ReadWriteOnce: the PV can be mounted read-write by a single node.
ReadOnlyMany: the PV can be mounted read-only by multiple nodes.
ReadWriteMany: the PV can be mounted read-write by multiple nodes.
persistentVolumeReclaimPolicy: the reclaim policy for the PV's space:
Recycle: scrubs the data and automatically makes the PV available again.
Retain: requires manual cleanup and reclamation.
Delete: for cloud storage; the backing volume is deleted along with the PV.
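The reclaim policy can also be changed on a live PV without editing the YAML; a minimal sketch using kubectl patch (pv1 is the PV from the example above):
[root@master yaml]# kubectl patch pv pv1 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'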
// Create a PVC that requests space from the PV just created. Note that PV-PVC binding is determined jointly by the storageClassName and accessModes fields.
[root@master yaml]# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
[root@master yaml]# kubectl apply -f pvc.yaml
kubectl get pvc shows that the PV and the PVC are now bound:
[root@master yaml]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc1 Bound pv1 1Gi RWO nfs 7s
Summary:
1. Once a PV in the cluster is bound, it cannot be bound by any other PVC.
2. If several PVs can satisfy a PVC, the system automatically binds the PV whose capacity fits the requested size, wasting as little storage as possible (see the sketch below).
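To see rule 2 in action, you could add a second, larger PV next to pv1; a 1Gi PVC should still bind to the 1Gi pv1 rather than to this one (an illustrative sketch only; pv2 and its path are hypothetical):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv2
spec:
  capacity:
    storage: 5Gi                  # larger than the 1Gi the PVC requests
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 192.168.1.20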
// Create a Pod that uses the PVC above
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod1
spec:
  containers:
  - name: pod1
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - name: mydata
      mountPath: "/data"
  volumes:
  - name: mydata
    persistentVolumeClaim:
      claimName: pvc1
[root@master yaml]# kubectl apply -f pod.yaml
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod1 1/1 Running 0 43s
Note: the directory referenced by path: under the nfs field of pv.yaml must be created in advance; otherwise Pod creation fails.
// Verify that the /nfsdata/pv1 directory and the Pod's "/data" directory hold the same content
[root@master yaml]# cd /nfsdata/pv1/
[root@master pv1]# echo "hello" > test.txt
[root@master pv1]# kubectl exec pod1 cat /data/test.txt
hello
Reclaiming PV space
When the reclaim policy is Recycle:
[root@master ~]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv1 1Gi RWO Recycle Bound default/pvc1 nfs 20m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc1 Bound pv1 1Gi RWO nfs 17m
// Check the data stored in the PV on the Docker host
[root@master ~]# ls /nfsdata/pv1/
test.txt
// Delete the Pod and the PVC
[root@master ~]# kubectl delete pod pod1
pod "pod1" deleted
[root@master ~]# kubectl delete pvc pvc1
persistentvolumeclaim "pvc1" deleted
// Watch the PV transition: Released -> Available
[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 1Gi RWO Recycle Released default/pvc1 nfs 25m
[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv1 1Gi RWO Recycle Available nfs 25m
// Verify that the data has indeed been deleted
[root@master ~]# ls /nfsdata/pv1/
PS: During the reclaim, Kubernetes actually launches a helper Pod that performs the data deletion.
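You can catch that helper Pod if you watch while the PVC is being deleted (its exact name varies by Kubernetes version, typically something like recycler-for-pv1):
[root@master ~]# kubectl get pod --watch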
When the reclaim policy is Retain:
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain    # change the reclaim policy
  storageClassName: nfs
  nfs:
// Re-create the PV, PVC, and Pod resources
[root@master yaml]# kubectl apply -f pv.yaml
persistentvolume/pv1 created
[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc1 created
[root@master yaml]# kubectl apply -f pod.yaml
pod/pod1 created
Then create some data, try deleting the PVC and the Pod, and check whether the data still exists in the PV directory.
[root@master yaml]# cd /nfsdata/pv1/
[root@master pv1]# echo "hi" > test.txt
[root@master pv1]# kubectl exec pod1 cat /data/test.txt
hi
[root@master pv1]# kubectl delete pod pod1
pod "pod1" deleted
[root@master pv1]# kubectl delete pvc pvc1
persistentvolumeclaim "pvc1" deleted
[root@master pv1]# ls
test.txt
The data still exists.
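A Retain PV whose PVC is gone stays in the Released state and is not re-bound automatically. To reuse it, either delete and re-create the PV, or clear its claimRef; a minimal sketch (assuming the PV is still named pv1):
[root@master yaml]# kubectl patch pv pv1 --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'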
Putting PV and PVC to use
Now deploy a MySQL service and persist MySQL's data.
1. Create the PV and PVC
[root@master pv]# vim pvmysql.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.20
[root@master pv]# vim pvcmysql.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
Apply and check:
[root@master yaml]# kubectl apply -f pvmysql.yaml
persistentvolume/mysql-pv created
[root@master yaml]# kubectl apply -f pvcmysql.yaml
persistentvolumeclaim/mysql-pvc created
[root@master yaml]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
mysql-pv 1Gi ROX Recycle Bound default/mysql-pvc nfs 84s
[root@master yaml]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-pvc Bound mysql-pv 1Gi ROX nfs 80s
2. Deploy MySQL
[root@master pv]# vim mysql.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mysql
spec:
  template:
    metadata:
      labels:
        test: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: mysql-test
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-test
        persistentVolumeClaim:
          claimName: mysql-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: NodePort
  selector:
    test: mysql
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 31306
Check:
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-5d6667c5b-f4ttx 1/1 Running 0 11s
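Because the Service is of type NodePort, the database should also be reachable from outside the cluster on port 31306 of any node; a quick sketch, assuming a MySQL client is installed on the machine you run it from:
[root@master pv]# mysql -h 192.168.1.20 -P 31306 -u root -p123.com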
Add some data in the MySQL database:
[root@master pv]# kubectl exec -it mysql-5d6667c5b-bw4cp bash
root@mysql-5d6667c5b-bw4cp:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SHOW DATABASES; // list the current databases
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.04 sec)
mysql> CREATE DATABASE TEST; // create the TEST database
Query OK, 1 row affected (0.02 sec)
mysql> USE TEST; // switch to the TEST database
Database changed
mysql> SHOW TABLES; // list the tables in the TEST database
Empty set (0.00 sec)
mysql> CREATE TABLE my_id(id int(4)); // create the my_id table
Query OK, 0 rows affected (0.03 sec)
mysql> INSERT my_id values (9527); // insert a row into my_id
Query OK, 1 row affected (0.02 sec)
mysql> SELECT * FROM my_id; // list all rows in my_id
+------+
| id |
+------+
| 9527 |
+------+
1 row in set (0.00 sec)
mysql> exit
Simulate MySQL failover
First find which node the MySQL Pod is running on, then suspend that node; the Pod will be rescheduled onto a node that is still healthy.
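If suspending or powering off the node is not an option, draining it gives a similar effect for this test (a sketch; node02 matches the run below, and --delete-local-data is the flag name in this v1.15-era cluster):
[root@master pv]# kubectl drain node02 --ignore-daemonsets --delete-local-data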
[root@master pv]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 35d v1.15.0
node01 Ready <none> 35d v1.15.0
node02 NotReady <none> 35d v1.15.0
The node has been shut down:
[root@master pv]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-5d6667c5b-5pnm6 1/1 Running 0 13m 10.244.1.6 node01 <none> <none>
mysql-5d6667c5b-bw4cp 1/1 Terminating 0 52m 10.244.2.4 node02 <none> <none>
Result:
The Pod on the downed node shows as Terminating, while a new Pod is created on the healthy node and runs normally.
Once the new Pod is up, enter it to verify the data still exists:
[root@master pv]# kubectl exec -it mysql-5d6667c5b-5pnm6 bash
root@mysql-5d6667c5b-5pnm6:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| TEST |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.04 sec)
mysql> use TEST;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+----------------+
| Tables_in_TEST |
+----------------+
| my_id |
+----------------+
1 row in set (0.00 sec)
mysql> select * from my_id;
+------+
| id |
+------+
| 9527 |
+------+
1 row in set (0.01 sec)
mysql> exit
The data is still there.
StorageClass
Put simply: a StorageClass can create PVs for us automatically.
Provisioner: the (storage) provider.
Create the shared directory
[root@master ~]# mkdir /nfsdata
1. Enable NFS
[root@master ~]# yum -y install nfs-utils rpcbind
[root@node01 ~]# yum -y install nfs-utils rpcbind
[root@node02 ~]# yum -y install nfs-utils rpcbind
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
[root@node01 ~]# systemctl start rpcbind
[root@node01 ~]# systemctl enable rpcbind
[root@node01 ~]# systemctl start nfs-server
[root@node01 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@node01 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
[root@node02 ~]# systemctl start rpcbind
[root@node02 ~]# systemctl enable rpcbind
[root@node02 ~]# systemctl start nfs-server
[root@node02 ~]# systemctl enable nfs-server
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@node02 ~]# showmount -e 192.168.1.20
Export list for 192.168.1.20:
/nfsdata *
2. Create a namespace
[root@master yaml]# vim ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bdqn
Apply and check:
[root@master yaml]# kubectl apply -f ns.yaml
namespace/bdqn created
[root@master yaml]# kubectl get ns
NAME STATUS AGE
bdqn Active 11s
3. Grant RBAC permissions (create a directory for the YAML files first with mkdir)
PS: RBAC is role-based access control, in full: Role-Based Access Control.
[root@master yaml]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: bdqn
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: bdqn    # must match the ServiceAccount's namespace above
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
Apply:
[root@master yaml]# kubectl apply -f rbac.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
4. Create nfs-deployment.yaml
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: bdqn
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn-test
        - name: NFS_SERVER
          value: 192.168.1.20
        - name: NFS_PATH
          value: /nfsdata
      volumes:                # backing NFS volume referenced by the volumeMounts above
      - name: nfs-client-root
        nfs:
          server: 192.168.1.20
          path: /nfsdata
Apply and check:
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
[root@master yaml]# kubectl get deployments -n bdqn
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 54s
5. Create the StorageClass resource
[root@master yaml]# vim storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass
  namespace: bdqn
provisioner: bdqn-test
reclaimPolicy: Retain
Apply and check:
[root@master yaml]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/storageclass created
[root@master yaml]# kubectl get storageclasses -n bdqn
NAME PROVISIONER AGE
storageclass bdqn-test 21s
6. Create a PVC to verify
[root@master yaml]# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: bdqn
spec:
  storageClassName: storageclass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
Apply:
[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/test-pvc created
Check:
[root@master yaml]# kubectl get pvc -n bdqn
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-pvc Bound pvc-b68ee3fa-8779-4aaa-90cf-eea914366441 200Mi RWO storageclass 23s
PS: This shows the PVC is bound to the PV that the StorageClass created automatically.
[root@master yaml]# ls /nfsdata/
bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441
Test:
[root@master nfsdata]# cd bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441/
[root@master bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441]# echo hello > index.html
[root@master nfsdata]# kubectl exec -it -n bdqn nfs-client-provisioner-856d966889-s7p2j sh
~ # cd /persistentvolumes/bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441/
/persistentvolumes/bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441 # cat index.html
hello
7. Create a Pod to test
[root@master yaml]# vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: bdqn
spec:
  containers:
  - name: test-pod
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 3000
    volumeMounts:
    - name: nfs-pv
      mountPath: /test
  volumes:
  - name: nfs-pv
    persistentVolumeClaim:
      claimName: test-pvc
Apply and check:
[root@master yaml]# kubectl apply -f pod.yaml
pod/test-pod created
[root@master yaml]# kubectl get pod -n bdqn
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-856d966889-s7p2j 1/1 Running 0 28m
test-pod 1/1 Running 0 25s
Test:
[root@master yaml]# ls /nfsdata/
bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441
[root@master yaml]# echo 123456 > /nfsdata/bdqn-test-pvc-pvc-b68ee3fa-8779-4aaa-90cf-eea914366441/test.txt
[root@master yaml]# kubectl exec -n bdqn test-pod cat /test/test.txt
123456
What would the test YAML look like with a Deployment, using a private image?
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: deployment-svc
  namespace: bdqn      # test-pvc lives in the bdqn namespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: deployment-svc
        image: 192.168.229.187:5000/httpd:v1
        volumeMounts:
        - name: nfs-pv
          mountPath: /usr/local/apache2/htdocs
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: test-pvc
---
kind: Service
apiVersion: v1
metadata:
  name: test-svc
  namespace: bdqn
spec:
  type: NodePort
  selector:
    app: httpd
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
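With an index.html placed in the PVC's backing directory (as in the test above), the page should then be reachable through the NodePort on any node; a quick check, using this lab's addressing:
[root@master yaml]# curl 192.168.1.20:30000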
Overview
StatefulSet overview: RC, Deployment, and DaemonSet are all aimed at stateless services; the IPs, names, and start/stop order of the Pods they manage are random. So what is a StatefulSet? As the name implies, it is a stateful set: it manages stateful services such as MySQL or MongoDB clusters.
A StatefulSet is essentially a variant of Deployment that became GA in v1.9. To solve the stateful-service problem, the Pods it manages have fixed names and a fixed start/stop order. In a StatefulSet the Pod name is the network identity (hostname), and shared storage must also be used.
A Deployment is paired with a Service, while a StatefulSet is paired with a headless service. A headless service differs from a normal Service in that it has no cluster IP; resolving its name returns the Endpoint list of all the Pods behind it.
In addition, on top of the headless service, the StatefulSet creates a DNS domain name for every Pod replica it controls, in the format:
$(podname).(headless service name)
FQDN: $(podname).(headless service name).namespace.svc.cluster.local
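For the StatefulSet created below (statefulset-0 behind headless-svc in the default namespace), the per-Pod name can be resolved from inside the cluster; a sketch using a throwaway busybox Pod (nslookup behavior varies slightly across busybox versions):
[root@master yaml]# kubectl run dns-test -it --rm --image=busybox --restart=Never -- nslookup statefulset-0.headless-svc.default.svc.cluster.local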
Running a stateful service
An example:
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: myweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myweb
        image: nginx
Apply:
[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset created
Check:
[root@master yaml]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
statefulset-0 0/1 ContainerCreating 0 13s
statefulset-0 1/1 Running 0 36s
statefulset-1 0/1 Pending 0 0s
statefulset-1 0/1 Pending 0 0s
statefulset-1 0/1 ContainerCreating 0 0s
statefulset-1 1/1 Running 0 47s
statefulset-2 0/1 Pending 0 0s
statefulset-2 0/1 Pending 0 0s
statefulset-2 0/1 ContainerCreating 0 0s
statefulset-2 1/1 Running 0 17s
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
statefulset-0 1/1 Running 0 2m23s
statefulset-1 1/1 Running 0 107s
statefulset-2 1/1 Running 0 60s
What is a headless service?
With a Deployment, each Pod name carries a random string, so Pod names are unordered; a StatefulSet requires them to be ordered. No Pod may be arbitrarily replaced, and a rebuilt Pod keeps the same name. Because Pod IPs change, Pods are identified by name: the Pod name is the Pod's unique identifier and must be stable and durable. This is where the headless service comes in: it gives every Pod a unique, resolvable name.
What is volumeClaimTemplate?
Stateful replica sets all need persistent storage. The defining trait of a distributed system is that its nodes hold different data, so they cannot share one volume: each node needs its own dedicated storage. A volume defined in a Deployment's Pod template is shared by all replicas, so the data would be identical, because every replica comes from the same template. In a StatefulSet, each Pod must have its own dedicated volume, which the Pod template cannot express. StatefulSet therefore uses volumeClaimTemplates, the volume claim template: it generates a distinct PVC for each Pod and binds it to a PV, giving every Pod dedicated storage. That is why volumeClaimTemplates are needed.
// Next, add the volumeClaimTemplates field to the YAML above.
1. Enable NFS
[root@master ~]# mkdir /nfsdata
[root@master ~]# yum -y install nfs-utils
[root@master ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@master ~]# systemctl start rpcbind
[root@master ~]# systemctl start nfs-server
[root@master ~]# systemctl enable rpcbind
[root@master ~]# systemctl enable nfs-server
[root@master ~]# showmount -e
Export list for master:
/nfsdata *
2. Grant RBAC permissions
[root@master yaml]# vim rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac.yaml
3. Create nfs-deployment.yaml
[root@master yaml]# vim nfs-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: bdqn-test
        - name: NFS_SERVER
          value: 192.168.1.20
        - name: NFS_PATH
          value: /nfsdata
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.20
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deploy.yaml
4. Create the StorageClass resource
[root@master yaml]# vim storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storageclass
provisioner: bdqn-test
reclaimPolicy: Retain
[root@master yaml]# kubectl apply -f storage.yaml
5. Create the StatefulSet resource
[root@master yaml]# vim statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
  labels:
    app: headless-svc
spec:
  ports:
  - name: myweb
    port: 80
  selector:
    app: headless-pod
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: statefulset
spec:
  serviceName: headless-svc
  replicas: 3
  selector:
    matchLabels:
      app: headless-pod
  template:
    metadata:
      labels:
        app: headless-pod
    spec:
      containers:
      - name: myweb
        image: nginx
        volumeMounts:
        - name: test-storage
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: test-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: storageclass
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
// Apply it as soon as it is written. Note that we never created a PV or PVC beforehand; check whether these two resource types now exist in the cluster.
Apply the statefulset.yaml file:
[root@master yaml]# kubectl apply -f statefulset.yaml
service/headless-svc created
statefulset.apps/statefulset created
[root@master yaml]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-95c6b358-e875-42f1-ab40-b740ea2b18db 100Mi RWO Delete Bound default/test-storage-statefulset-2 storageclass 3m45s
pvc-9910b735-b006-4b31-9932-19e679eddae8 100Mi RWO Delete Bound default/test-storage-statefulset-1 storageclass 4m1s
pvc-b68ee3fa-8779-4aaa-90cf-eea914366441 200Mi RWO Delete Released bdqn/test-pvc storageclass 83m
pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598 100Mi RWO Delete Bound default/test-storage-statefulset-0 storageclass 5m17s
[root@master yaml]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-storage-statefulset-0 Bound pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598 100Mi RWO storageclass 33m
test-storage-statefulset-1 Bound pvc-9910b735-b006-4b31-9932-19e679eddae8 100Mi RWO storageclass 4m10s
test-storage-statefulset-2 Bound pvc-95c6b358-e875-42f1-ab40-b740ea2b18db 100Mi RWO storageclass 3m54s
// The output above shows that the StorageClass created the PVs automatically and volumeClaimTemplates created the PVCs automatically. But does this satisfy the claim that every Pod has its own dedicated persistent directory, i.e. that each Pod's data is distinct?
// Write different data into each Pod to find out.
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-856d966889-xrfgf 1/1 Running 0 5m51s
statefulset-0 1/1 Running 0 4m22s
statefulset-1 1/1 Running 0 3m56s
statefulset-2 1/1 Running 0 3m40s
[root@master yaml]# kubectl exec -it statefulset-0 bash
root@statefulset-0:/# echo 00000 > /usr/share/nginx/html/index.html
root@statefulset-0:/# exit
exit
[root@master yaml]# kubectl exec -it statefulset-1 bash
root@statefulset-1:/# echo 11111 > /usr/share/nginx/html/index.html
root@statefulset-1:/# exit
exit
[root@master yaml]# kubectl exec -it statefulset-2 bash
root@statefulset-2:/# echo 22222 > /usr/share/nginx/html/index.html
root@statefulset-2:/# exit
exit
Looking at each Pod's persistent data directory shows that every Pod's content is different:
[root@master ~]# cd /nfsdata/
[root@master nfsdata]# ls
default-test-storage-statefulset-0-pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598
default-test-storage-statefulset-1-pvc-9910b735-b006-4b31-9932-19e679eddae8
default-test-storage-statefulset-2-pvc-95c6b358-e875-42f1-ab40-b740ea2b18db
[root@master nfsdata]# cat default-test-storage-statefulset-0-pvc-fc7c8560-0a1e-4ee5-8d50-d6528dccd598/index.html
00000
[root@master nfsdata]# cat default-test-storage-statefulset-1-pvc-9910b735-b006-4b31-9932-19e679eddae8/index.html
11111
[root@master nfsdata]# cat default-test-storage-statefulset-2-pvc-95c6b358-e875-42f1-ab40-b740ea2b18db/index.html
22222
Even if a Pod is deleted, the StatefulSet controller generates a new one. Ignore the Pod IP here: the name is guaranteed to match the old one, and, most importantly, the persisted data is still there.
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-856d966889-xrfgf 1/1 Running 0 21m 10.244.1.5 node01 <none> <none>
statefulset-0 1/1 Running 0 20m 10.244.2.5 node02 <none> <none>
statefulset-1 1/1 Running 0 19m 10.244.1.6 node01 <none> <none>
statefulset-2 1/1 Running 0 41s 10.244.2.7 node02 <none> <none>
[root@master yaml]# kubectl delete pod statefulset-2
pod "statefulset-2" deleted
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-856d966889-xrfgf 1/1 Running 0 21m
statefulset-0 1/1 Running 0 20m
statefulset-1 1/1 Running 0 19m
statefulset-2 0/1 ContainerCreating 0 5s
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-856d966889-xrfgf 1/1 Running 0 22m
statefulset-0 1/1 Running 0 20m
statefulset-1 1/1 Running 0 20m
statefulset-2 1/1 Running 0 17s
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-856d966889-xrfgf 1/1 Running 0 23m 10.244.1.5 node01 <none> <none>
statefulset-0 1/1 Running 0 21m 10.244.2.5 node02 <none> <none>
statefulset-1 1/1 Running 0 21m 10.244.1.6 node01 <none> <none>
statefulset-2 1/1 Running 0 99s 10.244.2.8 node02 <none> <none>
Verify the data still exists:
[root@master yaml]# curl 10.244.2.8
22222