# Chapter 5: kubernetes-05 Running Applications

## 5.1 Deployment

### 5.1.1 Running a Deployment

~~~
kubectl create namespace k8test          # create a namespace
#kubectl run nginx-deployment -n k8test --image=nginx:1.7.9 --replicas=2   # run a deployment; this form is no longer available
kubectl run --generator=run-pod/v1 nginx-deployment -n k8test --image=nginx:1.7.9 --replicas=2
kubectl get pods -n k8test
kubectl get deployment -n k8test         # list deployments
kubectl describe deployment -n k8test    # show more detailed deployment information
kubectl get replicaset -n k8test         # list the replica controllers
kubectl describe replicaset -n k8test    # detailed replica-controller information

# kubectl get replicaset -n k8test
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-748ff87d9d   2         2         2       8m28s
~~~

**kubectl describe**

~~~
$ kubectl describe deployment nginx-deployment -n k8test
$ kubectl describe replicaset nginx-deployment-748ff87d9d -n k8test
Name:           nginx-deployment-748ff87d9d
Namespace:      k8test
Selector:       pod-template-hash=748ff87d9d,run=nginx-deployment
Labels:         pod-template-hash=748ff87d9d
                run=nginx-deployment
Annotations:    deployment.kubernetes.io/desired-replicas: 2
                deployment.kubernetes.io/max-replicas: 3
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-deployment
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  pod-template-hash=748ff87d9d
           run=nginx-deployment
  Containers:
   nginx-deployment:
    Image:        nginx:1.7.9
    Port:
    Host Port:
    Environment:
    Mounts:
  Volumes:
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  11m   replicaset-controller  Created pod: nginx-deployment-748ff87d9d-vpfwg
  Normal  SuccessfulCreate  11m   replicaset-controller  Created pod: nginx-deployment-748ff87d9d-fhknq
~~~

To summarize, running a deployment goes through three steps:

A: The user creates the Deployment through kubectl.
B: The Deployment creates a ReplicaSet.
C: The ReplicaSet creates the Pods.

### 5.1.2 Commands vs. configuration files

Kubernetes offers two ways to create resources:

(1) Directly with a kubectl command, as with the deployment created above.
(2) Through a configuration file and `kubectl apply`, e.g. `kubectl apply -f nginx.yml`.

**Write the app YAML file**

~~~
vim nginx-deployment.yaml

apiVersion: extensions/v1beta1   # version of the configuration format
kind: Deployment                 # type of resource being created, here a Deployment
metadata:                        # metadata
  name: nginx-deployment         # name is required metadata
spec:                            # the spec describes the Deployment
  replicas: 2                    # number of replicas
  template:                      # Pod template
    metadata:                    # Pod metadata; at least one label must be defined
      labels:
        app: web_server          # the label key and value can be chosen freely
    spec:                        # Pod spec; defines each container in the Pod; name and image are required
      containers:
      - name: nginx
        image: nginx:1.7.9
~~~

**Deploy the app**

~~~
$ kubectl apply -f nginx-deployment.yaml
deployment.extensions/nginx-deployment created
~~~

**Check the result**

~~~
$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           11s

$ kubectl get pods -n k8test
NAME               READY   STATUS    RESTARTS   AGE
nginx-deployment   1/1     Running   0          25m
~~~

### 5.1.3 The Deployment configuration file in brief

The comments in the file above explain what each field means.

Deleting these resources:

~~~
kubectl delete deployment nginx-deployment   # afterwards both the deployment and its pods are gone
$ kubectl delete -f nginx.yml                # this works as well
~~~

### 5.1.4 Scaling (increasing or decreasing Pod replicas online)

Change the value of `replicas` to change the number of Pod replicas. By default, Pods are not scheduled to the Master node, although the Master can of course also be used as a regular Node.

There are two ways to make a change to a Deployment take effect:

1) Edit nginx.yml, then run `kubectl apply -f nginx.yml`.
2) Run `kubectl edit deployment deployment_XXXX`; the change takes effect as soon as it is saved.

~~~
vim nginx-deployment.yaml    # modify replicas: xx
kubectl apply -f nginx-deployment.yaml
kubectl get pods -o wide
~~~

### 5.1.5 Failover

When a Node fails, Kubernetes detects it, the status of the Pods on that Node changes to Unknown, and new Pods are created on the remaining healthy Nodes to keep the number of replicas at the desired count.

### 5.1.6 Using labels to control Pod placement

By default, the Scheduler may place a Pod on any available Node. Sometimes, however, a Pod should run on specific Nodes: a Pod with heavy disk I/O belongs on a Node equipped with SSDs, and a Pod that needs a GPU must run on a Node that has one.

**Kubernetes implements this with labels.**

A label is a key-value pair. Labels can be attached to resources of any kind, giving them arbitrary custom attributes. For example, the following commands mark node k8s-node-122132073 as a node equipped with SSDs.

**Label operations**

~~~
# add a label
$ kubectl label node k8s-node-122132073 disktype=ssd
node/k8s-node-122132073 labeled
$ kubectl get node --show-labels

# delete the label
$ kubectl label node k8s-node-122132073 disktype-
$ kubectl get node --show-labels
~~~

Edit nginx-deployment.yaml:

~~~
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: web_server
    spec:
      containers:
      - name: nginx-dp
        image: nginx:1.7.9
        ports:
        - containerPort: 80
      nodeSelector:
        disktype: ssd
~~~

~~~
$ kubectl get pod -o wide
~~~

The `nodeSelector` in the Pod template's spec tells the scheduler to deploy these Pods only to Nodes carrying the label disktype=ssd. Checking afterwards with `kubectl get pod -o wide` confirms that all Pods are running on the selected node.

## 5.2 DaemonSet

Replica Pods created by a Deployment are distributed across the Nodes, and a single Node may run several replicas. A DaemonSet is different: each Node runs at most one replica. Typical use cases:

(1) Running a storage daemon, such as glusterd or ceph, on every node of the cluster.
(2) Running a log-collection daemon, such as fluentd or logstash, on every node.
(3) Running a monitoring daemon, such as Prometheus Node Exporter or collectd, on every node.

~~~
kubectl get daemonset -n kube-system
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-flannel-ds-amd64     3         3         3       3            3           beta.kubernetes.io/arch=amd64     7d3h
kube-flannel-ds-arm       0         0         0       0            0           beta.kubernetes.io/arch=arm       7d3h
kube-flannel-ds-arm64     0         0         0       0            0           beta.kubernetes.io/arch=arm64     7d3h
kube-flannel-ds-ppc64le   0         0         0       0            0           beta.kubernetes.io/arch=ppc64le   7d3h
kube-flannel-ds-s390x     0         0         0       0            0           beta.kubernetes.io/arch=s390x     7d3h
kube-proxy                3         3         3       3            3           beta.kubernetes.io/os=linux       7d3h
~~~

### 5.2.1 kube-flannel-ds

~~~
kubectl edit daemonset kube-flannel-ds-amd64 --namespace=kube-system
~~~

### 5.2.2 kube-proxy

~~~
$ kubectl edit daemonset kube-proxy --namespace=kube-system
~~~

### 5.2.3 Running your own DaemonSet

~~~
vim node_exporter.yml
kubectl apply -f node_exporter.yml
~~~

## 5.3 Job

Jobs are suited for:

(1) One-off tasks, such as batch processing that runs to completion and is then destroyed.
(2) Periodically executed tasks, via CronJob.

~~~
vim myjob.yml

apiVersion: batch/v1    # the Job apiVersion
kind: Job
metadata:
  name: myjob
spec:
  template:
    metadata:
      name: myjob
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello k8s job!"]
      restartPolicy: Never   # when to restart the container; for a Job this can be Never or OnFailure,
                             # for a controller such as a Deployment it can be Always
~~~

~~~
$ kubectl apply -f myjob.yml
job.batch/myjob created

$ kubectl get job
NAME    COMPLETIONS   DURATION   AGE
myjob   1/1           10s        11s
~~~

**Inspect the result**

~~~
$ kubectl get job
$ kubectl get pods
myjob-tscmv   0/1   Completed   0   6m48s
$ kubectl logs -f myjob-tscmv
hello k8s job!
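# A couple of optional follow-up checks (an assumption, not from the original:
# these kubectl commands run against the same cluster, with the myjob Job above
# still present):
$ kubectl wait --for=condition=complete job/myjob --timeout=60s   # block until the Job finishes
$ kubectl describe job myjob                                      # events, completion count, start/finish times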
~~~

### 5.3.1 When a Pod fails

~~~
kubectl delete -f myjob.yml
vim myjob.yml
    restartPolicy: OnFailure    # change the restart policy
$ kubectl apply -f myjob.yml
$ kubectl get pods
NAME          READY   STATUS             RESTARTS   AGE
myjob-9dwmw   0/1     CrashLoopBackOff   2          46s
~~~

### 5.3.2 Job parallelism

A Job can run several of its Pods at the same time to improve throughput. This is configured with the `parallelism` and `completions` fields.

### 5.3.3 Scheduled Jobs

CronJob is the scheduled-task mechanism provided by Kubernetes.

~~~
vim cronjob.yml

apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   # run once every minute
  jobTemplate:              # the Job template
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello k8s cron job!"]
          restartPolicy: OnFailure
~~~

~~~
$ kubectl apply -f cronjob.yml
error: unable to recognize "cronjob.yml": no matches for kind "CronJob" in version "batch/v2alpha1"
~~~

Creating the task fails because Kubernetes does not enable CronJob (batch/v2alpha1) by default. Enable it in the apiserver manifest and restart the kubelet:

~~~
$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --runtime-config=batch/v2alpha1=true

$ systemctl restart kubelet.service

# check whether it took effect
$ kubectl api-versions | grep v2
autoscaling/v2beta1
autoscaling/v2beta2
batch/v2alpha1
~~~

Create the CronJob again:

~~~
$ kubectl apply -f cronjob.yml
cronjob.batch/hello created

$ kubectl get jobs
NAME               COMPLETIONS   DURATION   AGE
hello-1569410460   1/1           4s         57s

$ kubectl get pods
hello-1569411420-lkzxm   0/1   Completed   0   2m30s
hello-1569411480-m5zvd   0/1   Completed   0   90s
hello-1569411540-rr4ws   0/1   Completed   0   30s
~~~

**Stop the service**

~~~
$ kubectl delete -f cronjob.yml
~~~

### References

- [【目录】每天5分钟,玩转kubernetes](https://www.jianshu.com/p/aeef7a4f121c)
- [k8s deploy](https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/)
- [https://kubernetes.io/docs/tutorials/hello-minikube/](https://kubernetes.io/docs/tutorials/hello-minikube/)
- [https://kubernetes.io/zh/docs/tutorials/](https://kubernetes.io/zh/docs/tutorials/)
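Section 5.2.3 above applies node_exporter.yml without listing its contents. As a closing example, a minimal sketch of what such a DaemonSet manifest could look like is given below; the apiVersion matches the other examples in this chapter, while the namespace, image tag, and `hostNetwork` setting are illustrative assumptions, not taken from the original:

~~~
# node_exporter.yml — hypothetical example manifest
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system                      # namespace is an assumption
spec:
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true                       # report the node's own network stats
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.18.1     # image tag is an assumption
        ports:
        - containerPort: 9100                 # node_exporter's default metrics port
~~~

Because this is a DaemonSet, applying it with `kubectl apply -f node_exporter.yml` starts exactly one exporter Pod on every Node, which is exactly the monitoring use case listed in section 5.2.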