k8s: adding and removing nodes

1. Adding a node

First, install Docker and the basic Kubernetes components on the server that will become the new node.
Once installation is complete, check the token values on the master node.
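
The installation itself is not covered by this article. As a rough sketch on CentOS 7 (the OS shown in the node listings below), assuming the Docker and Kubernetes yum repositories are already configured, prerequisites such as disabling swap are already handled, and the version is pinned to the cluster's v1.21.0, it might look like this:

# On the new node: install the container runtime and the kubeadm/kubelet/kubectl packages
# (version pinned to match the existing cluster), then enable the services
yum install -y docker-ce kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
systemctl enable --now docker
systemctl enable --now kubelet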

# List the existing tokens
[root@master01 home]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
1ov8zv.vluxt3utekty0v68   19h         2022-02-16T10:13:03+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
y4416x.gwqezithbg8x08o7   20h         2022-02-16T10:52:22+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

If the command returns nothing, generate a new token on the master node and obtain the CA certificate hash:

# Generate a kubeadm join command with a time-limited token
[root@master01 home]# kubeadm token create --print-join-command
kubeadm join 10.12.7.1:6443 --token 31p6ct.zb25s79alds487bt --discovery-token-ca-cert-hash sha256:d5b62e22f47dfbff761ea5b9303b5451a2cd0a140bb0b1b76442d6f6b7b1482a
# Generate a kubeadm join command with a token that never expires
[root@master01 home]# kubeadm token create --ttl 0 --print-join-command
kubeadm join 10.12.7.1:6443 --token akcaed.drvxi53eq2qaq19d --discovery-token-ca-cert-hash sha256:d5b62e22f47dfbff761ea5b9303b5451a2cd0a140bb0b1b76442d6f6b7b1482a
# List the tokens again; the difference is in the TTL
[root@master01 home]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
1ov8zv.vluxt3utekty0v68   19h         2022-02-16T10:13:03+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
31p6ct.zb25s79alds487bt   23h         2022-02-16T14:49:28+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
akcaed.drvxi53eq2qaq19d   <forever>   <never>                     authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

When valid tokens already exist, you can fetch a token and the corresponding hash directly and assemble the kubeadm join command yourself:

# On the master node
# Get the token value
[root@master01 home]# kubeadm token list | awk -F" " '{print $1}' |tail -n 1
y4416x.gwqezithbg8x08o7
# Get the CA certificate hash
[root@master01 home]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
d5b62e22f47dfbff761ea5b9303b5451a2cd0a140bb0b1b76442d6f6b7b1482a
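# (Optional sketch, not part of the original session: capture both values in shell
#  variables and print a ready-to-paste join command; replace <master-ip> with the
#  real API server address)
TOKEN=$(kubeadm token list | awk '{print $1}' | tail -n 1)
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "kubeadm join <master-ip>:6443 --token ${TOKEN} --discovery-token-ca-cert-hash sha256:${HASH}"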
# Run the assembled command on the node server
[root@bogon home]# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

From the master node you can now check whether the new node has joined and view its status:

# On the master node
[root@master01 home]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   4h54m   v1.21.0
work01     Ready    <none>                 52m     v1.21.0
# Show a detailed node listing (nodes are cluster-scoped, so no namespace flag is needed)
[root@master01 home]# kubectl get nodes -owide
NAME       STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
master01   Ready    control-plane,master   5h1m   v1.21.0   10.12.7.1     <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12
work01     Ready    <none>                 58m    v1.21.0   10.12.7.2     <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://20.10.12

Check the pod information from the master node:

# On the master node
[root@master01 home]# kubectl get pods -n kube-system -owide
NAME                                      READY   STATUS    RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
calico-kube-controllers-958545d87-x8kvn   1/1     Running   0          5h33m   10.244.241.67   master01   <none>           <none>
calico-node-nzsw7                         1/1     Running   0          5h33m   10.12.7.1       master01   <none>           <none>
calico-node-t5q5m                         1/1     Running   0          93m     10.12.7.2       work01     <none>           <none>
coredns-545d6fc579-vlsxp                  1/1     Running   0          5h35m   10.244.241.65   master01   <none>           <none>
coredns-545d6fc579-x7drn                  1/1     Running   0          5h35m   10.244.241.66   master01   <none>           <none>
etcd-master01                             1/1     Running   0          5h36m   10.12.7.1       master01   <none>           <none>
kube-apiserver-master01                   1/1     Running   0          5h36m   10.12.7.1       master01   <none>           <none>
kube-controller-manager-master01          1/1     Running   0          5h36m   10.12.7.1       master01   <none>           <none>
kube-proxy-gng4w                          1/1     Running   0          93m     10.12.7.2       work01     <none>           <none>
kube-proxy-rgrjv                          1/1     Running   0          5h35m   10.12.7.1       master01   <none>           <none>
kube-scheduler-master01                   1/1     Running   0          5h36m   10.12.7.1       master01   <none>           <none>

Deploy an nginx pod to verify that workloads are scheduled onto the new node.
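
The command that created the pod is not captured in the output below; a minimal way to start such a pod for this check (assuming the nginx image can be pulled on the node) would be:

# On the master node: start a single nginx pod (illustrative example)
kubectl run nginx --image=nginx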

[root@master01 home]# kubectl get pods -owide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          19s   10.244.205.193   work01   <none>           <none>
2. Removing a node (operations on the master node)

Get the list of nodes:

[root@master01 home]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h46m   v1.21.0
work01     Ready    <none>                 103m    v1.21.0

Check whether the node to be removed still has resources in use on it:

[root@master01 home]# kubectl get pods -A -owide | grep work01
default       nginx                                     1/1     Running   0          5m51s   10.244.205.193   work01     <none>           <none>
kube-system   calico-node-t5q5m                         1/1     Running   0          106m    10.12.7.2        work01     <none>           <none>
kube-system   kube-proxy-gng4w                          1/1     Running   0          106m    10.12.7.2        work01     <none>           <none>

Cordoning and uncordoning the node

# After the node is cordoned, pods already running on it keep running as before; new pods simply will not be scheduled onto it
[root@master01 home]# kubectl cordon work01
node/work01 cordoned
[root@master01 home]# kubectl get nodes
NAME       STATUS                     ROLES                  AGE    VERSION
master01   Ready                      control-plane,master   6h8m   v1.21.0
work01     Ready,SchedulingDisabled   <none>                 125m   v1.21.0
[root@master01 home]# kubectl get pods -A -owide | grep work01
default       nginx                                     1/1     Running   0          25m    10.244.205.193   work01     <none>           <none>
kube-system   calico-node-t5q5m                         1/1     Running   0          125m   10.12.7.2        work01     <none>           <none>
kube-system   kube-proxy-gng4w                          1/1     Running   0          125m   10.12.7.2        work01     <none>           <none>

# Bring the node back online (uncordon)
[root@master01 home]# kubectl uncordon work01
node/work01 uncordoned
[root@master01 home]# kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   6h9m   v1.21.0
work01     Ready    <none>                 126m   v1.21.0

Deleting the node (on the master node)

# Drain the node; this also evicts the pods that are running on it
[root@master01 home]# kubectl drain work01 --delete-local-data --force --ignore-daemonsets
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/work01 cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/nginx; ignoring DaemonSet-managed Pods: kube-system/calico-node-t5q5m, kube-system/kube-proxy-gng4w
evicting pod default/nginx
pod/nginx evicted
node/work01 evicted
[root@master01 home]# kubectl get nodes
NAME       STATUS                     ROLES                  AGE     VERSION
master01   Ready                      control-plane,master   6h13m   v1.21.0
work01     Ready,SchedulingDisabled   <none>                 131m    v1.21.0
[root@master01 home]#  kubectl get pods -owide | grep work01
No resources found in default namespace.
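
As the warning above indicates, --delete-local-data is deprecated; on newer kubectl releases the equivalent invocation (shown for reference only, not run here) is:

kubectl drain work01 --delete-emptydir-data --force --ignore-daemonsets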

# Delete the node from the cluster
[root@master01 home]# kubectl delete node work01
node "work01" deleted
[root@master01 home]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   6h15m   v1.21.0

Finally, on the removed node itself, reset its kubeadm state:

[root@bogon home]# kubeadm reset -f
[preflight] Running pre-flight checks
W0215 16:31:04.436589   46093 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Note that the kubeconfig deleted by the reset is /etc/kubernetes/kubelet.conf; other kubeconfig files such as $HOME/.kube/config have to be removed manually, as the output above indicates.
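
Following the hints in the reset output above, the leftover state on the removed node can be cleaned up manually. A minimal sketch (double-check each path and rule on your own system before deleting anything):

# Remove the CNI configuration left behind by the network plugin
rm -rf /etc/cni/net.d
# Remove any leftover kubeconfig for this user
rm -rf $HOME/.kube/config
# Flush the iptables rules created by kube-proxy / the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Clear IPVS tables if the cluster used IPVS mode (requires ipvsadm)
ipvsadm --clear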
