Kubernetes: Common kubectl Commands for Troubleshooting and Remediation

Common kubectl commands for troubleshooting

No. Command Description
1 version Show client- and server-side version information
2 api-versions List the API versions supported by the server, in group/version format
3 explain Show documentation for resource types and their fields
4 get List objects of a given resource type
5 describe Show detailed information about an object
6 logs Print the logs of a container in a pod
7 exec Run a command inside a container
8 cp Copy files out of or into a container
9 attach Attach to a running container

kubectl version

The version command confirms the client- and server-side version information. Behavior can vary considerably between versions, so the first thing to check when troubleshooting is which versions the environment is actually running. As shown below, the examples in this article were verified on 1.11.2.

[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl api-versions

The api-versions command lists the API versions supported by the server side of the current Kubernetes installation.

[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
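Related and worth knowing on 1.11+ clusters: kubectl api-resources lists the resource types themselves, including the short names used later in this article, and the API group each belongs to. A minimal sketch (the subcommand may not exist on older clusters):

kubectl api-resources                      # resource types, short names, and API groups
kubectl api-resources --namespaced=true    # only namespaced resource types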

kubectl explain

kubectl explain serves as reference documentation, much like kubectl help: it describes each resource type and the fields it is made of. The example below shows the documentation for rc (ReplicationController). It rarely plays a decisive role during troubleshooting, but it is worth reading through to deepen your understanding of each resource.

[root@master ~]# kubectl explain rc
DESCRIPTION:
ReplicationController represents the configuration of a replication controller.

FIELDS:
   apiVersion   
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources

   kind 
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds

   metadata 
     If the Labels of a ReplicationController are empty, they are defaulted to
     be the same as the Pod(s) that the replication controller manages. Standard
     object's metadata. More info:
     http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata

   spec 
     Spec defines the specification of the desired behavior of the replication
     controller. More info:
     http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status

   status   
     Status is the most recently observed status of the replication controller.
     This data may be out of date by some window of time. Populated by the
     system. Read-only. More info:
     http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status 
  

Resource types explain can document

The resource types it supports are listed below:

Resource type
clusters (federation apiservers only)
componentstatuses (short name: cs)
configmaps (short name: cm)
daemonsets (short name: ds)
deployments (short name: deploy)
endpoints (short name: ep)
events (short name: ev)
horizontalpodautoscalers (short name: hpa)
ingresses (short name: ing)
jobs
limitranges (short name: limits)
namespaces (short name: ns)
networkpolicies
nodes (short name: no)
persistentvolumeclaims (short name: pvc)
persistentvolumes (short name: pv)
pods (short name: po)
podsecuritypolicies (short name: psp)
podtemplates
replicasets (short name: rs)
replicationcontrollers (short name: rc)
resourcequotas (short name: quota)
secrets
serviceaccounts (short name: sa)
services (short name: svc)
statefulsets
storageclasses
thirdpartyresources
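explain can also drill into nested fields with dot notation, which is often more useful in practice than the top-level description. A small sketch with standard field paths (the exact output depends on your cluster version):

kubectl explain pod.spec.containers        # document the containers field of a pod spec
kubectl explain deployment.spec.strategy   # document a deployment's update strategy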

kubectl get

Use the get command to check the pods and deployments you have created.

Checking pods

This lists all the pods that have been created; the abbreviated form kubectl get po works as well.

[root@master ~]# kubectl get pods

Checking deployments

This lists all the deployments that have been created.

[root@master ~]# kubectl get deployment

For a bit more detail, add the -o wide flag; for pods this also shows which node each pod runs on and what its cluster IP is.

[root@master ~]# kubectl get pods -o wide

Checking nodes

Show information about the nodes.

[root@master ~]# kubectl get nodes -o wide

Checking namespaces

List all namespaces.

[root@master ~]# kubectl get namespaces

Resource types get can list

Combining nodes, pods, events, namespaces and so on gives you the basic facts about the cluster and its state. The resource types it supports are listed below:

Resource type
clusters (federation apiservers only)
componentstatuses (short name: cs)
configmaps (short name: cm)
daemonsets (short name: ds)
deployments (short name: deploy)
endpoints (short name: ep)
events (short name: ev)
horizontalpodautoscalers (short name: hpa)
ingresses (short name: ing)
jobs
limitranges (short name: limits)
namespaces (short name: ns)
networkpolicies
nodes (short name: no)
persistentvolumeclaims (short name: pvc)
persistentvolumes (short name: pv)
pods (short name: po)
podsecuritypolicies (short name: psp)
podtemplates
replicasets (short name: rs)
replicationcontrollers (short name: rc)
resourcequotas (short name: quota)
secrets
serviceaccounts (short name: sa)
services (short name: svc)
statefulsets
storageclasses
thirdpartyresources
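A few get variations that come up constantly while troubleshooting: filtering by label, listing across all namespaces, and sorting events chronologically. A sketch, using the name=mysql label that appears in the describe example below:

kubectl get pods -l name=mysql                              # filter by label selector
kubectl get pods --all-namespaces                           # pods in every namespace
kubectl get events --sort-by=.metadata.creationTimestamp    # events in time order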

kubectl describe

Checking node details

Typically you use get to find the node, then describe to see its details.
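For node-level problems (disk or memory pressure, pods failing to schedule), describing a node shows its conditions, capacity, allocated resources, and recent events. A sketch, taking a node name from kubectl get nodes:

kubectl get nodes
kubectl describe node 192.168.32.133    # substitute a node name from your own cluster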

Checking the details of a pod

[root@ku8-1 tmp]# kubectl describe pod mysql-478535978-1dnm2
Name:       mysql-478535978-1dnm2
Namespace:  default
Node:       192.168.32.133/192.168.32.133
Start Time: Thu, 29 Jun 2017 05:04:21 -0400
Labels:     name=mysql
        pod-template-hash=478535978
Status:     Running
IP:     172.200.44.2
Controllers:    ReplicaSet/mysql-478535978
Containers:
  mysql:
    Container ID:   docker://47ef1495e86f4b69414789e81081fa55b837dafe9e47944894e7cb3733700410
    Image:      192.168.32.131:5000/mysql:5.7.16
    Image ID:       docker-pullable://192.168.32.131:5000/mysql@sha256:410b279f6827492da7a355135e6e9125849f62eeca76429974a534f021852b58
    Port:       3306/TCP
    State:      Running
      Started:      Thu, 29 Jun 2017 05:04:22 -0400
    Ready:      True
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dzs1w (ro)
    Environment Variables:
      MYSQL_ROOT_PASSWORD:  hello123
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
Volumes:
  default-token-dzs1w:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-dzs1w
QoS Class:  BestEffort
Tolerations:    
No events.
[root@ku8-1 tmp]# 

Checking deployment details

Check the details of a specific deployment.

[root@ku8-1 tmp]# kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mysql       1         1         1            1           1h
sonarqube   1         1         1            1           1h
[root@ku8-1 tmp]# kubectl describe deployment mysql
Name:           mysql
Namespace:      default
CreationTimestamp:  Thu, 29 Jun 2017 05:04:21 -0400
Labels:         name=mysql
Selector:       name=mysql
Replicas:       1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       RollingUpdate
MinReadySeconds:    0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Conditions:
  Type      Status  Reason
  ----      ------  ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets: 
NewReplicaSet:  mysql-478535978 (1/1 replicas created)
No events.
[root@ku8-1 tmp]# 

Resource types describe can inspect

The resource types describe supports are listed below:

Resource type
clusters (federation apiservers only)
componentstatuses (short name: cs)
configmaps (short name: cm)
daemonsets (short name: ds)
deployments (short name: deploy)
endpoints (short name: ep)
events (short name: ev)
horizontalpodautoscalers (short name: hpa)
ingresses (short name: ing)
jobs
limitranges (short name: limits)
namespaces (short name: ns)
networkpolicies
nodes (short name: no)
persistentvolumeclaims (short name: pvc)
persistentvolumes (short name: pv)
pods (short name: po)
podsecuritypolicies (short name: psp)
podtemplates
replicasets (short name: rs)
replicationcontrollers (short name: rc)
resourcequotas (short name: quota)
secrets
serviceaccounts (short name: sa)
services (short name: svc)
statefulsets
storageclasses
thirdpartyresources

kubectl logs

Similar to docker logs, kubectl logs retrieves the logs of a container running in a pod, another important source of information when troubleshooting.

[root@ku8-1 tmp]# kubectl logs mysql-478535978-1dnm2
Initializing database
...
2017-06-29T09:04:37.081939Z 0 [Note] Event Scheduler: Loaded 0 events
2017-06-29T09:04:37.082097Z 0 [Note] mysqld: ready for connections.
Version: '5.7.16'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
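A few logs flags worth knowing: -f follows the stream, --previous shows the log of the previous container instance (useful after a crash or restart), and -c selects a container in a multi-container pod. A sketch using the pod from above:

kubectl logs -f mysql-478535978-1dnm2            # follow the log stream
kubectl logs --previous mysql-478535978-1dnm2    # log of the previously terminated container
kubectl logs mysql-478535978-1dnm2 -c mysql      # pick the container explicitly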

kubectl exec

The exec command runs a command inside a container. For example, the following runs hostname inside the mysql container.

[root@ku8-1 tmp]# kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
mysql-478535978-1dnm2        1/1       Running   0          1h
sonarqube-3574384362-m7mdq   1/1       Running   0          1h
[root@ku8-1 tmp]# kubectl exec mysql-478535978-1dnm2 hostname
mysql-478535978-1dnm2
[root@ku8-1 tmp]# 

More commonly you log into the pod and, when circumstances allow, inspect the scene of the failure directly. This is the most direct, effective, and fastest approach, but it also requires broader permissions.

[root@ku8-1 tmp]# kubectl exec -it mysql-478535978-1dnm2 sh
# hostname
mysql-478535978-1dnm2
# 
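Note that newer kubectl releases want the in-container command separated from the pod name by --, and will warn about or reject the older form used above. The equivalent invocations would be:

kubectl exec mysql-478535978-1dnm2 -- hostname
kubectl exec -it mysql-478535978-1dnm2 -- sh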

kubectl cp

cp exchanges files between a pod and the outside world. The following example shows how.

Create a file message.log inside the pod

[root@ku8-1 tmp]# kubectl exec -it mysql-478535978-1dnm2 sh
# pwd
/
# cd /tmp
# echo "this is a message from `hostname`" >message.log
# cat message.log
this is a message from mysql-478535978-1dnm2
# exit
[root@ku8-1 tmp]#

Copy the file out and verify it

[root@ku8-1 tmp]# kubectl cp mysql-478535978-1dnm2:/tmp/message.log message.log
tar: Removing leading `/' from member names
[root@ku8-1 tmp]# cat message.log
this is a message from mysql-478535978-1dnm2
[root@ku8-1 tmp]#

Modify message.log and copy it back into the pod

[root@ku8-1 tmp]# echo "information added in `hostname`" >>message.log 
[root@ku8-1 tmp]# cat message.log 
this is a message from mysql-478535978-1dnm2
information added in ku8-1
[root@ku8-1 tmp]# kubectl cp message.log mysql-478535978-1dnm2:/tmp/message.log
[root@ku8-1 tmp]# 

Verify the updated content

[root@ku8-1 tmp]# kubectl exec mysql-478535978-1dnm2 cat /tmp/message.log
this is a message from mysql-478535978-1dnm2
information added in ku8-1
[root@ku8-1 tmp]#

kubectl attach

Similar to docker attach: it attaches to the running container and streams its output in real time, much like a live kubectl logs.

[root@ku8-1 tmp]# kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
mysql-478535978-1dnm2        1/1       Running   0          1h
sonarqube-3574384362-m7mdq   1/1       Running   0          1h
[root@ku8-1 tmp]# kubectl attach sonarqube-3574384362-m7mdq
If you don't see a command prompt, try pressing enter.

kubectl cluster-info

cluster-info and cluster-info dump are also useful for gathering information. In particular, when you need the whole picture at once, a single kubectl cluster-info dump is quicker than running commands one by one.
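For reference, a minimal sketch of both forms; the dump can also be written to a directory for offline analysis:

kubectl cluster-info
kubectl cluster-info dump
kubectl cluster-info dump --output-directory=/tmp/cluster-state    # write the dump to files instead of stdout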

Common kubectl commands for fixing problems

No. Command Description
1 edit Edit a resource on the server
2 replace Replace a resource from a file or stdin
3 patch Partially update a resource
4 apply Apply a configuration change from a file or stdin
5 scale Set the replica count of a Deployment/ReplicaSet/RC/Job
6 autoscale Configure autoscaling for a Deployment/ReplicaSet/RC
7 cordon Mark a node as unschedulable
8 uncordon Mark a node as schedulable again
9 drain Prepare a node for maintenance

kubectl edit

The edit command edits a resource live on the server. What exactly that means is easiest to see from the following example.

Checking the object to edit

Use -o yaml to dump the current definition of the nginx service. This captures the live state, which is all you can do when you only have the running environment and not the original YAML file.

[root@ku8-1 tmp]# kubectl get service |grep nginx
nginx        172.200.229.212          80:31001/TCP   2m
[root@ku8-1 tmp]# kubectl get service nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-06-30T04:50:44Z
  labels:
    name: nginx
  name: nginx
  namespace: default
  resourceVersion: "77068"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: ad45612a-5d4f-11e7-91ef-000c2933b773
spec:
  clusterIP: 172.200.229.212
  ports:
  - nodePort: 31001
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[root@ku8-1 tmp]# 

Edit the nginx service with the edit command.

The current nodePort is 31001; in this edit we change it to 31002.

[root@ku8-1 tmp]# kubectl edit service nginx
service "nginx" edited
[root@ku8-1 tmp]#

Checking after the edit, the service port has indeed changed.

[root@ku8-1 tmp]# kubectl get service
NAME         CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE
kubernetes   172.200.0.1               443/TCP        1d
nginx        172.200.229.212          80:31002/TCP   8m
[root@ku8-1 tmp]# 

 

One typical use case: edit modifies the running configuration without stopping the service.

kubectl replace

Once you know what edit does, replace is self-explanatory: it replaces a resource. Using the service from the previous example, let's change the port back to 31001.

Before

Confirm the port is currently 31002.

[root@ku8-1 tmp]# kubectl get service
NAME         CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE
kubernetes   172.200.0.1               443/TCP        1d
nginx        172.200.229.212          80:31002/TCP   17m
[root@ku8-1 tmp]# 

Dump the current nginx service definition to a file, then modify the port.

[root@ku8-1 tmp]# kubectl get service nginx -o yaml >nginx_forreplace.yaml
[root@ku8-1 tmp]# cp -p nginx_forreplace.yaml nginx_forreplace.yaml.org
[root@ku8-1 tmp]# vi nginx_forreplace.yaml
[root@ku8-1 tmp]# diff nginx_forreplace.yaml nginx_forreplace.yaml.org
15c15
<   - nodePort: 31001
---
>   - nodePort: 31002
[root@ku8-1 tmp]# 

Run the replace command

The output confirms that the resource was replaced.

[root@ku8-1 tmp]# kubectl replace -f nginx_forreplace.yaml
service "nginx" replaced
[root@ku8-1 tmp]#

Verify the result

The port has indeed been changed back to 31001.

[root@ku8-1 tmp]# kubectl get service
NAME         CLUSTER-IP        EXTERNAL-IP   PORT(S)        AGE
kubernetes   172.200.0.1               443/TCP        1d
nginx        172.200.229.212          80:31001/TCP   20m
[root@ku8-1 tmp]#

kubectl patch

patch is very useful when you only want to modify part of a resource's settings, especially on versions before 1.2. Changing the port back and forth gets boring, so this time let's swap the image instead.

Before

The nginx running in the current pod is the 1.12 alpine image.

[root@ku8-1 tmp]# kubectl exec nginx-2476590065-1vtsp  -it sh
/ # nginx -v
nginx version: nginx/1.12.0
/ # 

Run patch to swap the image

[root@ku8-1 tmp]# kubectl patch pod nginx-2476590065-1vtsp -p '{"spec":{"containers":[{"name":"nginx","image":"192.168.32.131:5000/nginx:1.13-alpine"}]}}'
"nginx-2476590065-1vtsp" patched
[root@ku8-1 tmp]# 

Verify the result

The image in the pod has been patched to 1.13.

[root@ku8-1 tmp]# kubectl exec nginx-2476590065-1vtsp  -it sh
/ # nginx -v
nginx version: nginx/1.13.1
/ # 
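Patching the pod directly only changes that one pod; if the deployment replaces it, the change is gone. To make the new image stick, you would normally patch the deployment's pod template instead. A sketch, assuming the same deployment name and registry as above:

kubectl patch deployment nginx -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"192.168.32.131:5000/nginx:1.13-alpine"}]}}}}'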

kubectl apply

Likewise, the apply command updates configuration from a file or from stdin.

Preparation

[root@ku8-1 tmp]# kubectl delete -f nginx/
deployment "nginx" deleted
service "nginx" deleted
[root@ku8-1 tmp]# kubectl create -f nginx/
deployment "nginx" created
service "nginx" created
[root@ku8-1 tmp]# 

Check the result

The service's nodePort is set to 31001.

[root@ku8-1 tmp]# kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   172.200.0.1              443/TCP        1d
nginx        172.200.68.154          80:31001/TCP   11s
[root@ku8-1 tmp]# 

Modify the configuration file

Change the port to 31002.

[root@ku8-1 tmp]# vi nginx/nginx.yaml 
[root@ku8-1 tmp]# grep 31002 nginx/nginx.yaml 
    nodePort: 31002
[root@ku8-1 tmp]# 

Run the apply command

Applying the file changes the port while the service stays running.

[root@ku8-1 tmp]# kubectl apply -f nginx/nginx.yaml 
deployment "nginx" configured
service "nginx" configured
[root@ku8-1 tmp]# 

Check the result

The port has indeed been changed to 31002.

[root@ku8-1 tmp]# kubectl get service
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   172.200.0.1              443/TCP        1d
nginx        172.200.68.154          80:31002/TCP   1m
[root@ku8-1 tmp]#

kubectl scale

The scale command handles horizontal scaling, one of the key features of container orchestration platforms such as Kubernetes or Swarm. Let's see how it is used.

Preparation

Set the nginx replicas to one; the pod is confirmed to be running on 192.168.32.132.

[root@ku8-1 tmp]# kubectl delete -f nginx/
deployment "nginx" deleted
service "nginx" deleted
[root@ku8-1 tmp]# kubectl create -f nginx/
deployment "nginx" created
service "nginx" created
[root@ku8-1 tmp]# 
[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-74tpk   1/1       Running   0          17s       172.200.26.2   192.168.32.132
[root@ku8-1 tmp]# kubectl get deployments -o wide
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     1         1         1            1           27s
[root@ku8-1 tmp]#

Run the scale command

Use scale to grow the deployment from 1 replica to 3.

[root@ku8-1 tmp]# kubectl scale --current-replicas=1 --replicas=3 deployment/nginx
deployment "nginx" scaled
[root@ku8-1 tmp]# 

The deployment has been scaled out: in addition to 192.168.32.132, nodes 133 and 134 now each run one pod, which is exactly what scale is for.

[root@ku8-1 tmp]# kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     3         3         3            3           2m
[root@ku8-1 tmp]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-74tpk   1/1       Running   0          2m        172.200.26.2   192.168.32.132
nginx-2476590065-cm5d9   1/1       Running   0          16s       172.200.44.2   192.168.32.133
nginx-2476590065-hmn9j   1/1       Running   0          16s       172.200.59.2   192.168.32.134
[root@ku8-1 tmp]#

kubectl autoscale

The autoscale command configures automatic scaling. Unlike scale, which must be run by hand, autoscale adjusts the replica count based on load. It can target a Deployment/ReplicaSet/RC, configured with a minimum and a maximum replica count. Only the command and its output are shown here; the scaling behavior itself is not verified further.

[root@ku8-1 tmp]# kubectl autoscale deployment nginx --min=2 --max=5
deployment "nginx" autoscaled
[root@ku8-1 tmp]# 

There are of course some constraints. For example, with 3 replicas currently running, what happens if you set the minimum to 2?

[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-74tpk   1/1       Running   0          5m        172.200.26.2   192.168.32.132
nginx-2476590065-cm5d9   1/1       Running   0          2m        172.200.44.2   192.168.32.133
nginx-2476590065-hmn9j   1/1       Running   0          2m        172.200.59.2   192.168.32.134
[root@ku8-1 tmp]# 
[root@ku8-1 tmp]# kubectl autoscale deployment nginx --min=2 --max=2
Error from server (AlreadyExists): horizontalpodautoscalers.autoscaling "nginx" already exists
[root@ku8-1 tmp]# 
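The AlreadyExists error occurs because autoscale creates a HorizontalPodAutoscaler object named after the deployment, and one already exists from the previous command. You can inspect it and delete it before setting new bounds. A sketch:

kubectl get hpa                  # show existing autoscalers
kubectl delete hpa nginx         # remove the existing one
kubectl autoscale deployment nginx --min=2 --max=2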

kubectl cordon and uncordon

In day-to-day maintenance a node may break or need work, and you temporarily don't want new pods scheduled onto it. cordon tells Kubernetes not to place pods there; uncordon removes that restriction. For example:

Preparation

An nginx pod is created and runs on 192.168.32.133.

[root@ku8-1 tmp]# kubectl create -f nginx/
deployment "nginx" created
service "nginx" created
[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-dnsmw   1/1       Running   0          6s        172.200.44.2   192.168.32.133
[root@ku8-1 tmp]#

Run the scale command

Scale out to 3 replicas. The scheduler spreads the pods across the nodes, one per node, so 134 also gets one.

[root@ku8-1 tmp]# kubectl scale --replicas=3 deployment/nginx
deployment "nginx" scaled
[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-550sm   1/1       Running   0          5s        172.200.26.2   192.168.32.132
nginx-2476590065-bt3bc   1/1       Running   0          5s        172.200.59.2   192.168.32.134
nginx-2476590065-dnsmw   1/1       Running   0          17s       172.200.44.2   192.168.32.133
[root@ku8-1 tmp]# kubectl get pods -o wide |grep 192.168.32.134
nginx-2476590065-bt3bc   1/1       Running   0          12s       172.200.59.2   192.168.32.134
[root@ku8-1 tmp]# 

Run the cordon command

Cordon node 134 so it can no longer accept pods; kubectl get nodes then shows its status as SchedulingDisabled.

[root@ku8-1 tmp]# kubectl cordon 192.168.32.134
node "192.168.32.134" cordoned
[root@ku8-1 tmp]# kubectl get nodes -o wide
NAME             STATUS                     AGE       EXTERNAL-IP
192.168.32.132   Ready                      1d        
192.168.32.133   Ready                      1d        
192.168.32.134   Ready,SchedulingDisabled   1d        
[root@ku8-1 tmp]# 

Run the scale command again

Scale out again and see whether any pod lands on 134. Only the pre-existing pod remains there; no new pods are scheduled onto it.

[root@ku8-1 tmp]# kubectl scale --replicas=6 deployment/nginx
deployment "nginx" scaled
[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-550sm   1/1       Running   0          32s       172.200.26.2   192.168.32.132
nginx-2476590065-7vxvx   1/1       Running   0          3s        172.200.44.3   192.168.32.133
nginx-2476590065-bt3bc   1/1       Running   0          32s       172.200.59.2   192.168.32.134
nginx-2476590065-dnsmw   1/1       Running   0          44s       172.200.44.2   192.168.32.133
nginx-2476590065-fclhj   1/1       Running   0          3s        172.200.44.4   192.168.32.133
nginx-2476590065-fl9fn   1/1       Running   0          3s        172.200.26.3   192.168.32.132
[root@ku8-1 tmp]# kubectl get pods -o wide |grep 192.168.32.134
nginx-2476590065-bt3bc   1/1       Running   0          37s       172.200.59.2   192.168.32.134
[root@ku8-1 tmp]# 

Run the uncordon command

Use uncordon to lift the restriction on node 134; get nodes confirms its status is back to normal.

[root@ku8-1 tmp]# kubectl uncordon 192.168.32.134
node "192.168.32.134" uncordoned
[root@ku8-1 tmp]# 
[root@ku8-1 tmp]# kubectl get nodes -o wide
NAME             STATUS    AGE       EXTERNAL-IP
192.168.32.132   Ready     1d        
192.168.32.133   Ready     1d        
192.168.32.134   Ready     1d        
[root@ku8-1 tmp]# 

Run the scale command once more

Scaling once more, new pods can now be created on node 134.

[root@ku8-1 tmp]# kubectl scale --replicas=10 deployment/nginx
deployment "nginx" scaled
[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-550sm   1/1       Running   0          1m        172.200.26.2   192.168.32.132
nginx-2476590065-7vn6z   1/1       Running   0          3s        172.200.44.4   192.168.32.133
nginx-2476590065-7vxvx   1/1       Running   0          35s       172.200.44.3   192.168.32.133
nginx-2476590065-bt3bc   1/1       Running   0          1m        172.200.59.2   192.168.32.134
nginx-2476590065-dnsmw   1/1       Running   0          1m        172.200.44.2   192.168.32.133
nginx-2476590065-fl9fn   1/1       Running   0          35s       172.200.26.3   192.168.32.132
nginx-2476590065-pdx91   1/1       Running   0          3s        172.200.59.3   192.168.32.134
nginx-2476590065-swvwf   1/1       Running   0          3s        172.200.26.5   192.168.32.132
nginx-2476590065-vdq2k   1/1       Running   0          3s        172.200.26.4   192.168.32.132
nginx-2476590065-wdv52   1/1       Running   0          3s        172.200.59.4   192.168.32.134
[root@ku8-1 tmp]#

kubectl drain

The drain command prepares a node for maintenance. In English, "drain" evokes emptying the water out of a pipe before you can work on it. Let's see what kubectl actually does when it "drains" a node.

Preparation

Set the nginx replicas to 4; two of the pods end up on node 134.

[root@ku8-1 tmp]# kubectl create -f nginx/
deployment "nginx" created
service "nginx" created
[root@ku8-1 tmp]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-d6h8f   1/1       Running   0          8s        172.200.59.2   192.168.32.134
[root@ku8-1 tmp]# 
[root@ku8-1 tmp]# kubectl get nodes -o wide
NAME             STATUS    AGE       EXTERNAL-IP
192.168.32.132   Ready     1d        
192.168.32.133   Ready     1d        
192.168.32.134   Ready     1d        
[root@ku8-1 tmp]# 
[root@ku8-1 tmp]# kubectl scale --replicas=4 deployment/nginx
deployment "nginx" scaled
[root@ku8-1 tmp]# 
[root@ku8-1 tmp]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-2476590065-9lfzh   1/1       Running   0          12s       172.200.59.3   192.168.32.134
nginx-2476590065-d6h8f   1/1       Running   0          1m        172.200.59.2   192.168.32.134
nginx-2476590065-v8xvf   1/1       Running   0          43s       172.200.26.2   192.168.32.132
nginx-2476590065-z94cq   1/1       Running   0          12s       172.200.44.2   192.168.32.133
[root@ku8-1 tmp]# 

Run the drain command

Running drain does two things:
1. Marks the node unschedulable (cordon)
2. Evicts the two pods running on it

[root@ku8-1 tmp]# kubectl drain 192.168.32.134
node "192.168.32.134" cordoned
pod "nginx-2476590065-d6h8f" evicted
pod "nginx-2476590065-9lfzh" evicted
node "192.168.32.134" drained
[root@ku8-1 tmp]# 
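On a busier node, drain often refuses to proceed because of DaemonSet-managed pods or pods not backed by a controller. Two commonly used flags deal with this; use --force carefully, since it evicts pods that nothing will recreate. A sketch:

kubectl drain 192.168.32.134 --ignore-daemonsets            # skip DaemonSet-managed pods
kubectl drain 192.168.32.134 --ignore-daemonsets --force    # also evict unmanaged (bare) pods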

Check the result

"Evict" means to expel or reclaim, so let's look at what the eviction actually produced.
Node 134 no longer runs any pods, while two new pods have been created on 132 and 133 to replace the ones evicted from 134; that replacement is guaranteed by the replica mechanism. So drain evicts the node's pods and marks it unschedulable ("drains" it), after which the node is ready for maintenance; once you are done, simply run uncordon again.
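The verification output is not captured above; a typical check, followed by returning the node to service once maintenance is finished, would be:

kubectl get pods -o wide          # no pods should remain on 192.168.32.134
kubectl get nodes                 # 192.168.32.134 stays SchedulingDisabled until uncordoned
kubectl uncordon 192.168.32.134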
