Multi-node k8s deployment (12): the six resource controllers explained, with usage

Pod categories:
Bare (unmanaged) pods: when such a pod exits, nothing recreates it
Controller-managed pods: throughout the controller's lifetime, the desired number of pod replicas is always maintained

I. What is a controller

	Kubernetes ships with many built-in controllers. Each acts like a state machine that drives pods toward a desired state and behavior.

II. Types of controllers

	1. ReplicationController and ReplicaSet
	2. Deployment
	3. DaemonSet
	4. StatefulSet
	5. Job/CronJob
	6. Horizontal Pod Autoscaling

III. What each controller does

1. RC and RS

		ReplicationController (RC) ensures that the number of replicas of a containerized application always matches the user-defined count: if a container exits abnormally, a new Pod is automatically created to replace it, and any surplus containers are automatically reclaimed.
		In newer versions of Kubernetes, ReplicaSet is recommended as a replacement for ReplicationController. A ReplicaSet is not fundamentally different from a ReplicationController apart from the name, but ReplicaSet additionally supports set-based selectors (implemented via labels).
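As a hypothetical illustration of a set-based selector (the `matchExpressions` form that ReplicationController lacks), a ReplicaSet could select pods whose `app` label is any of several values; the name and label values below are made up for this sketch:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    # Set-based selector: matches pods whose "app" label is nginx OR web
    matchExpressions:
    - key: app
      operator: In
      values: ["nginx", "web"]
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

An equality-based selector (`matchLabels`, as in the RS example later in this article) can only match one exact value per key; `In`, `NotIn`, and `Exists` operators are what make the selector set-based.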

2. Deployment

		A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the older ReplicationController for convenient application management.
		Typical use cases include:
			Defining a Deployment to create Pods and a ReplicaSet
			Rolling upgrades and rollbacks of an application
			Scaling out and scaling in
			Pausing and resuming a Deployment
		Declarative (Deployment): apply (preferred)	create
		Imperative (RS):          create (preferred)	apply

3. DaemonSet

		A DaemonSet ensures that all (or some) Nodes run one copy of a Pod. When a Node joins the cluster, a Pod is added for it; when a Node is removed from the cluster, its Pod is reclaimed. Deleting a DaemonSet deletes all Pods it created.
		Some typical uses of a DaemonSet:
			Running a cluster storage daemon on every Node, e.g. glusterd or ceph

			Running a log-collection daemon on every Node, e.g. fluentd or Logstash
			Running a monitoring daemon on every Node, e.g. Prometheus Node Exporter, collectd, the Datadog agent, the New Relic agent, or Ganglia gmond
		Note: a DaemonSet does not define more than one Pod replica per Node; to run multiple Pod replicas, define multiple DaemonSets.

4. Job

		A Job handles batch tasks, i.e. tasks that are executed once. It guarantees that one or more Pods of the batch task terminate successfully.
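To make "one or more Pods" concrete, a hypothetical Job spec can ask for several successful completions with some parallelism (the name and numbers below are illustrative, not from this article):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-demo
spec:
  completions: 5    # the Job succeeds once 5 Pods have completed successfully
  parallelism: 2    # run at most 2 Pods at the same time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never
```

With both fields omitted (as in the pi example later in this article), a Job runs a single Pod to one successful completion.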

5. CronJob

		A CronJob manages time-based Jobs, i.e.:
			Run once at a given point in time
			Run periodically at given points in time
		Prerequisite: the Kubernetes cluster version must be >= 1.8 (for CronJob). For clusters on earlier versions (< 1.8), the batch/v2alpha1 API can be enabled by starting the API Server with the option --runtime-config=batch/v2alpha1=true.

		Typical uses:
			Scheduling a Job to run at a given point in time
			Creating periodically running Jobs, e.g. database backups or sending email

6. StatefulSet

		A StatefulSet is a controller that gives each Pod a unique identity and guarantees ordering for deployment and scaling.
		StatefulSet exists to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services).
			Its use cases include:
				Stable persistent storage: a Pod can still reach the same persisted data after being rescheduled; implemented with PVCs

				Stable network identity: a Pod keeps the same Pod name and hostname after being rescheduled; implemented with a Headless Service (i.e. a ClusterIP Service with no IP address or port)

				Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-up proceeds in the defined order (from 0 to N-1; before the next Pod runs, all preceding Pods must be Running and Ready); implemented with init containers

				Ordered scale-down and ordered deletion (from N-1 to 0)
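This article includes no StatefulSet manifest, so here is a minimal sketch; the `nginx` headless Service, the 1Gi volume size, and the reuse of the `hub.iso.com/xitong/nginx:v1` image (the registry used elsewhere in this article) are all assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx          # must name an existing headless Service
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.iso.com/xitong/nginx:v1
        ports:
        - containerPort: 80
  volumeClaimTemplates:       # one PVC per Pod provides stable storage
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

Pods are created as web-0, web-1, web-2 in order, each resolvable via the headless Service and each keeping its own `www` PVC across rescheduling.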

7. Horizontal Pod Autoscaling

		An application's resource usage typically has peaks and troughs. How do we smooth them out, raise the cluster's overall resource utilization, and have the number of Pods behind a Service adjust automatically? That is what Horizontal Pod Autoscaling is for: as the name suggests, it scales Pods horizontally and automatically.
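No HPA manifest appears in this article, so the following is a sketch targeting the `nginx-deployment` used in section IV; the CPU target and replica bounds are made-up values, and it assumes a metrics source (e.g. metrics-server, or Heapster on clusters of this era) is running:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:              # the workload being scaled
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add Pods when average CPU exceeds 80%
```

The same object can be created imperatively with `kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80`.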

IV. How to use each controller

1. RS

		1) Edit the YAML file and apply it
			[root@k8s-master1 ~]# vim ./rs.yaml 
			apiVersion: extensions/v1beta1
			kind: ReplicaSet
			metadata:
			  name: nginx
			spec:
			  replicas: 2
			  selector:
			    matchLabels:
			      app: nginx
			  template:
			    metadata:
			      labels:
			        app: nginx
			    spec:
			      containers:
			      - name: nginx
			        image: hub.iso.com/xitong/nginx
			        imagePullPolicy: IfNotPresent
			        env:
			        - name: GET_HOST_FROM
			          value: dns
			        ports:
			        - containerPort: 80
		2) Check the pod labels
			[root@k8s-master1 ~]# kubectl get pod --show-labels
			NAME          READY   STATUS    RESTARTS   AGE     LABELS
			nginx-4pnfz   1/1     Running   0          3m18s   app=nginx
			nginx-6jf6x   1/1     Running   0          3m18s   app=nginx
		3) Check the RS labels
			[root@k8s-master1 ~]# kubectl get rs --show-labels
			NAME    DESIRED   CURRENT   READY   AGE   LABELS
			nginx   2         2         2       4m    app=nginx
		4) Change one pod's label and check whether a new pod is created
			[root@k8s-master1 ~]# kubectl label pod  nginx-4pnfz app=nginxs --overwrite=True
			[root@k8s-master1 ~]# kubectl get pod --show-labels
			NAME          READY   STATUS    RESTARTS   AGE   LABELS
			nginx-4pnfz   1/1     Running   0          11m   app=nginxs
			nginx-6jf6x   1/1     Running   0          11m   app=nginx
			nginx-ghl62   1/1     Running   0          85s   app=nginx
		5) Delete the RS and check whether the relabeled pod is deleted (it is not)
			[root@k8s-master1 ~]# kubectl delete rs --all
				replicaset.extensions "nginx" deleted
			[root@k8s-master1 ~]# kubectl get pod --show-labels
			NAME          READY   STATUS        RESTARTS   AGE    LABELS
			nginx-4pnfz   1/1     Running       0          11m    app=nginxs
			nginx-6jf6x   0/1     Terminating   0          11m    app=nginx
			nginx-ghl62   0/1     Terminating   0          118s   app=nginx
2. Deployment
		1) Edit the YAML file and apply it
			[root@k8s-master1 ~]# vim deployment.yaml 
				apiVersion: extensions/v1beta1
				kind: Deployment
				metadata:
				  name: nginx-deployment
				spec:
				  replicas: 2
				  template:
				    metadata:
				      labels:
				        app: nginx
				    spec:
				      containers:
				      - name: nginx
				        image: hub.iso.com/xitong/nginx:v1
				        imagePullPolicy: IfNotPresent
				        ports:
				        - containerPort: 80
			[root@k8s-master1 ~]# kubectl apply -f deployment.yaml 
				deployment.extensions/nginx-deployment created
			[root@k8s-master1 ~]# kubectl get pod
				NAME                                READY   STATUS    RESTARTS   AGE
				nginx-deployment-6cc7fdf549-j48mn   1/1     Running   0          6s
				nginx-deployment-6cc7fdf549-n9qck   1/1     Running   0          6s
			# Check the served page content
			[root@k8s-master1 ~]# kubectl get pod -o wide
				NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
				nginx-deployment-6cc7fdf549-j48mn   1/1     Running   0          97s   172.18.0.3   192.168.100.40   <none>           <none>
				nginx-deployment-6cc7fdf549-n9qck   1/1     Running   0          97s   172.18.0.3   192.168.100.30   <none>           <none>
			[root@k8s-node1 ~]# curl http://172.18.0.3
				-e v1
		2) Scale out
			[root@k8s-master1 ~]# kubectl get deployment
				NAME               READY   UP-TO-DATE   AVAILABLE   AGE
				nginx-deployment   2/2     2            2           3m58s
			[root@k8s-master1 ~]# kubectl scale deployment nginx-deployment --replicas 5
			[root@k8s-master1 ~]# kubectl get pod
				NAME                                READY   STATUS    RESTARTS   AGE
				nginx-deployment-6cc7fdf549-7bjhn   1/1     Running   0          8s
				nginx-deployment-6cc7fdf549-h4p4m   1/1     Running   0          8s
				nginx-deployment-6cc7fdf549-j48mn   1/1     Running   0          4m52s
				nginx-deployment-6cc7fdf549-n9qck   1/1     Running   0          4m52s
				nginx-deployment-6cc7fdf549-ncxt7   1/1     Running   0          8s
		3) Update the image
			[root@k8s-master1 ~]# kubectl delete deployment --all
			[root@k8s-master1 ~]# kubectl apply -f deployment.yaml --record
			[root@k8s-master1 ~]# kubectl set image deployment/nginx-deployment nginx=hub.iso.com/xitong/nginx:v2
			[root@k8s-master1 ~]# kubectl get rs
				NAME                          DESIRED   CURRENT   READY   AGE
				nginx-deployment-6cc7fdf549   0         0         0       3m22s
				nginx-deployment-b75d69456    2         2         2       25s
			[root@k8s-master1 ~]# kubectl get pod -o wide
				NAME                               READY   STATUS    RESTARTS   AGE    IP           NODE             NOMINATED NODE   READINESS GATES
				nginx-deployment-b75d69456-cwch5   1/1     Running   0          110s   172.18.0.3   192.168.100.40   <none>           <none>
				nginx-deployment-b75d69456-sgtp6   1/1     Running   0          110s   172.18.0.4   192.168.100.30   <none>           <none>
			[root@k8s-node1 ~]# curl http://172.18.0.4
				-e v2
		4) Roll back
			[root@k8s-master1 ~]# kubectl rollout undo deployment/nginx-deployment
			[root@k8s-master1 ~]# kubectl rollout status deployment nginx-deployment
			[root@k8s-master1 ~]# kubectl get rs
				NAME                          DESIRED   CURRENT   READY   AGE
				nginx-deployment-6cc7fdf549   2         2         2       10m
				nginx-deployment-b75d69456    0         0         0       7m53s
			# Extended example:
			Upgrade:
			[root@k8s-master1 ~]# kubectl set image deployment/nginx-deployment nginx=hub.iso.com/xitong/nginx:v3
			[root@k8s-master1 ~]# kubectl get pod -o wide
				NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES
				nginx-deployment-69b5b599f9-cn2xz   1/1     Running   0          13s   172.18.0.4   192.168.100.40   <none>           <none>
				nginx-deployment-69b5b599f9-jzrgx   1/1     Running   0          13s   172.18.0.3   192.168.100.30   <none>           <none>
			[root@k8s-node1 ~]# curl http://172.18.0.3
				-e v3
			[root@k8s-master1 ~]# kubectl get rs
				NAME                          DESIRED   CURRENT   READY   AGE
				nginx-deployment-69b5b599f9   2         2         2       60s	(v3)
				nginx-deployment-6cc7fdf549   0         0         0       14m	(v1)
				nginx-deployment-b75d69456    0         0         0       11m	(v2)
			[root@k8s-master1 ~]# kubectl rollout history deployment/nginx-deployment	(keep a record of which revision maps to which image)
				deployment.extensions/nginx-deployment 
				REVISION  CHANGE-CAUSE
				2         kubectl apply --filename=deployment.yaml --record=true(v2)
				3         kubectl apply --filename=deployment.yaml --record=true(v1)
				4         kubectl apply --filename=deployment.yaml --record=true(v3)
			A plain rollback would return to v1; to jump straight to v2:
				[root@k8s-master1 ~]# kubectl rollout undo deployment/nginx-deployment --to-revision=2
				[root@k8s-node1 ~]# curl http://172.18.0.4
					-e v2
				[root@k8s-master1 ~]# kubectl rollout history deployment/nginx-deployment
					deployment.extensions/nginx-deployment 
					REVISION  CHANGE-CAUSE
					3         kubectl apply --filename=deployment.yaml --record=true(v1)
					4         kubectl apply --filename=deployment.yaml --record=true(v3)
					5         kubectl apply --filename=deployment.yaml --record=true(v2)
		5) Cleanup policy
			You can set .spec.revisionHistoryLimit to specify how many revision history records the Deployment keeps at most. By default all revisions are kept;
			if the field is set to 0, the Deployment cannot roll back at all
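For example, the Deployment above could be limited to keeping two old ReplicaSets by adding one field to its spec (a hypothetical fragment, with the limit value chosen arbitrarily):

```yaml
spec:
  revisionHistoryLimit: 2   # keep at most 2 old ReplicaSets available for rollback
  replicas: 2
```

Older ReplicaSets beyond the limit are garbage-collected, and their revisions disappear from `kubectl rollout history`.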

3. DaemonSet

		[root@k8s-master1 ~]# vim daemonset.yaml 
			apiVersion: apps/v1
			kind: DaemonSet
			metadata:
			  name: daemonset-example
			  labels:
			    app: daemonset
			spec:
			  selector:
			    matchLabels:
			      name: daemonset-example
			  template:
			    metadata:
			      labels:
			        name: daemonset-example
			    spec:
			      containers:
			      - name: daemonset
			        image: hub.iso.com/xitong/nginx:v1
			        imagePullPolicy: IfNotPresent
		[root@k8s-master1 ~]# kubectl create -f daemonset.yaml
		[root@k8s-master1 ~]# kubectl get daemonset
			NAME                DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
			daemonset-example   2         2         2       2            2           <none>          13s
		[root@k8s-master1 ~]# kubectl get pod
			NAME                      READY   STATUS    RESTARTS   AGE
			daemonset-example-fsmq7   1/1     Running   0          21s
			daemonset-example-gmc4l   1/1     Running   0          21s

4. Job

		[root@k8s-node1 ~]# docker load -i perl.tar
		[root@k8s-master1 ~]# vim job.yaml 
			apiVersion: batch/v1
			kind: Job
			metadata:
			  name: pi
			spec:
			  template:
			    metadata:
			      name: pi
			    spec:
			      containers:
			      - name: pi
			        image: perl
			        command: ["perl","-Mbignum=bpi","-wle","print bpi(200)"]
			        imagePullPolicy: IfNotPresent
			      restartPolicy: Never
		[root@k8s-master1 ~]# kubectl create -f job.yaml
        [root@k8s-master1 ~]# kubectl get pod
			NAME       READY   STATUS      RESTARTS   AGE
			pi-t46gc   0/1     Completed   0          49s
		[root@k8s-master1 ~]# kubectl get job
			NAME   COMPLETIONS   DURATION   AGE
			pi     1/1           2s         54s
		[root@k8s-master1 ~]# kubectl log pi-t46gc
			log is DEPRECATED and will be removed in a future version. Use logs instead.
			3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303820

5. CronJob Spec

		1) .spec.schedule:					the schedule; required; specifies the run period, in Cron format
		2) .spec.jobTemplate:				the Job template; required; specifies the task to run, same format as job.spec
		3) .spec.startingDeadlineSeconds:	deadline (in seconds) for starting a Job; optional. If a scheduled run is missed for any reason, the Job that missed its execution time counts as failed. If unspecified, there is no deadline
		4) .spec.concurrencyPolicy:			concurrency policy; optional. It specifies how concurrent executions of Jobs created by this CronJob are handled. Only one of the following policies may be specified:
			Allow (default): allow Jobs to run concurrently
			Forbid: forbid concurrent runs; if the previous run has not finished yet, skip the next one
			Replace: cancel the currently running Job and replace it with a new one
		Note: the policy applies only to Jobs created by the same CronJob. If there are multiple CronJobs, the Jobs they create are always allowed to run concurrently.
		5) .spec.suspend:					suspend; optional. If set to true, all subsequent executions are suspended. It has no effect on Jobs that have already started. Defaults to false.
		6) .spec.successfulJobsHistoryLimit and .spec.failedJobsHistoryLimit: history limits; optional fields. They specify how many completed and failed Jobs are kept, defaulting to 3 and 1 respectively. Setting a limit to 0 means Jobs of that kind are not kept after they finish.
		7) Some limitations of CronJob itself
			Job creation should be idempotent
		[root@k8s-master1 ~]# vim cronjob.yaml 
			apiVersion: batch/v1beta1
			kind: CronJob
			metadata:
			  name: hello
			spec:
			  schedule: "*/1 * * * *"
			  jobTemplate:
			    spec:
			     template:
			       spec:
			         containers:
			         - name: pi
			           image: perl
			           imagePullPolicy: IfNotPresent
			           args:
			           - /bin/sh
			           - -c
			           - date; echo hello from the kubernetes cluster
			         restartPolicy: OnFailure
		[root@k8s-master1 ~]# kubectl create -f cronjob.yaml 
		[root@k8s-master1 ~]# kubectl get job
			NAME               COMPLETIONS   DURATION   AGE
			hello-1588505460   0/1           1s         1s
		[root@k8s-master1 ~]# kubectl get cronjob
			NAME    SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
			hello   */1 * * * *   False     1        11s             64s
		[root@k8s-master1 ~]# kubectl get pod
			NAME                     READY   STATUS      RESTARTS   AGE
			hello-1588505460-78bvj   0/1     Completed   0          81s
			hello-1588505520-22dkf   0/1     Completed   0          21s
		[root@k8s-master1 ~]# kubectl log hello-1588505460-78bvj
			log is DEPRECATED and will be removed in a future version. Use logs instead.
			Sun May  3 11:31:03 UTC 2020
			hello from the kubernetes cluster












