Building on the previous setup (a Kubernetes v1.13.2 cluster installed with kubeadm), we now run the simplest possible nginx workload on the cluster and observe how Kubernetes schedules its containers.
The Nginx ReplicationController manifest looks like this:
[root@localhost ~]# cat mynginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        ports:
        - containerPort: 80
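(A small addition, not part of the original walkthrough: the manifest can be validated before it touches the cluster. On v1.13, kubectl create still takes the boolean --dry-run flag; newer releases spell it --dry-run=client.)
[root@localhost ~]# kubectl create -f mynginx-rc.yaml --dry-run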
The Nginx Service manifest looks like this:
[root@localhost ~]# cat mynginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30002
  selector:
    app: nginx-test
Create the ReplicationController and the Service from the manifests:
[root@localhost ~]# kubectl create -f mynginx-rc.yaml
[root@localhost ~]# kubectl create -f mynginx-svc.yaml
[root@k8s-master ~]# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
mysql-v7vhb        1/1     Running   0          3h
myweb-bgzvg        1/1     Running   0          178m
nginx-test-lwttj   1/1     Running   0          25s
nginx-test-z4cht   1/1     Running   0          25s
[root@k8s-master ~]# kubectl get rc
NAME         DESIRED   CURRENT   READY   AGE
mysql        1         1         1       3h1m
myweb        1         1         1       178m
nginx-test   2         2         2       33s
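As a side note (not performed in this session), the replica count can be adjusted at any time and the scheduler will place or remove pods accordingly, for example:
[root@k8s-master ~]# kubectl scale rc nginx-test --replicas=3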
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          24h
mysql        ClusterIP   10.96.95.56      <none>        3306/TCP         179m
myweb        NodePort    10.101.46.172    <none>        8080:30001/TCP   176m
nginx-test   NodePort    10.111.101.176   <none>        80:30002/TCP     28s
[root@k8s-master ~]# netstat -ntlp | grep 30002
tcp6 0 0 :::30002 :::* LISTEN 10462/kube-proxy
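kube-proxy listens on the NodePort on every node, so nginx should answer on any node's IP. As an optional check (using Node1's address 192.168.1.130, which appears in the describe output further below):
[root@k8s-master ~]# curl -I http://192.168.1.130:30002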
Use the kubectl describe pod command to view each pod's details, including which physical Node it was scheduled onto:
[root@k8s-master ~]# kubectl describe pod nginx-test
// Only part of the output is shown below:
Name:               nginx-test-4fvtm
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node1/192.168.1.130
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Name:               nginx-test-9lt2g
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node2/192.168.1.131
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
As the output shows, the two Pods were scheduled onto two different physical Nodes. Pods on different nodes reach each other through the CNI network plugin installed earlier (Calico in our case).
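The same placement can be read in one line per pod with the -o wide flag, which adds the pod IP and NODE columns to the listing; with Calico those pod IPs are routable from any node:
[root@k8s-master ~]# kubectl get pod -o wide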
Next we shut down the physical node Node2 (k8s-node2/192.168.1.131; referred to as Node2 below) and watch how Kubernetes reschedules the containers. Checking the node status immediately after the shutdown shows no change at all; only after a while does the Master notice that Node2 cannot be reached and mark it NotReady:
[root@k8s-master ~]# kubectl get no
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   24h   v1.13.2
k8s-node1    Ready      <none>   24h   v1.13.2
k8s-node2    NotReady   <none>   24h   v1.13.2
Meanwhile, Pod2 (nginx-test-9lt2g, the Pod running on Node2; referred to as Pod2 below) still shows STATUS Running, but its Ready condition flips from True to False.
After a further interval t1, Pod2's STATUS changes from Running to Terminating, and a replacement Pod is created on Node1. We did not measure t1 precisely and are not sure which setting controls it, but it most likely corresponds to the tolerations shown by kubectl describe pod:
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
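These two tolerations are injected into every pod by default, and they mean the pod is evicted 300 s after its node becomes not-ready or unreachable. If that reading is correct, t1 could be shortened by setting tolerationSeconds explicitly in the pod template, e.g. (an untested sketch):
      tolerations:
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30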
[root@k8s-master ~]# kubectl get pod
NAME               READY   STATUS        RESTARTS   AGE
nginx-test-4fvtm   1/1     Running       0          10m
nginx-test-5ksxh   1/1     Running       0          3m57s
nginx-test-9lt2g   1/1     Terminating   0          10m
We then power Node2 back on; it automatically reports itself to the Master, and once the boot completes its STATUS returns to Ready. The pod stuck in Terminating is not removed immediately, but is cleaned up automatically after a further interval t2. The newly created Pod3 (nginx-test-5ksxh), however, is not automatically moved from Node1 back to Node2 (see the rebalancing note after the listing below).
[root@k8s-master ~]# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
mysql-v7vhb        1/1     Running   0          4h16m
myweb-cnfbh        1/1     Running   0          65m
nginx-test-4fvtm   1/1     Running   0          71m
nginx-test-5ksxh   1/1     Running   0          65m
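This is expected: the scheduler assigns a node only when a pod is created and never rebalances running pods. To spread nginx-test across both nodes again, one manual option (not executed here) is to delete one of the pods and let the ReplicationController recreate it, at which point the scheduler is free to pick Node2:
[root@k8s-master ~]# kubectl delete pod nginx-test-5ksxh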
On Node1, inspect the containers Docker has created: both nginx pods are running on Node1.
[root@k8s-node1 ~]# docker ps | grep nginx
0c81fc8808f2 docker.io/nginx@sha256:56bcd35e8433343dbae0484ed5b740843dd8bff9479400990f251c13bbb94763 "nginx -g 'daemon ..." About an hour ago Up About an hour k8s_nginx-test_nginx-test-5ksxh_default_53ae0dcb-27a1-11e9-8bd1-000c29d747fb_0
9a6caae7a4f3 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_nginx-test-5ksxh_default_53ae0dcb-27a1-11e9-8bd1-000c29d747fb_0
c690b1d16e11 docker.io/nginx@sha256:56bcd35e8433343dbae0484ed5b740843dd8bff9479400990f251c13bbb94763 "nginx -g 'daemon ..." About an hour ago Up About an hour k8s_nginx-test_nginx-test-4fvtm_default_624fa9c6-27a0-11e9-8bd1-000c29d747fb_0
0bc2479d8a70 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_nginx-test-4fvtm_default_624fa9c6-27a0-11e9-8bd1-000c29d747fb_0
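For completeness, the same check on Node2 should come back empty once the Terminating pod has been cleaned up (a hypothetical verification, not captured in the original session):
[root@k8s-node2 ~]# docker ps | grep nginx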