K8s ❉ cni0/flannel.1 interfaces missing on a node

Problem description:

        The existing test environment has three k8s servers. Two new servers were added, and on the new servers the flannel.1 and cni0 interfaces were never created.

[root@slave1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:99:7b:ac:77  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.247.137  netmask 255.255.255.0  broadcast 192.168.247.255
        inet6 fe80::e506:6851:fb1e:4f55  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::5082:19ec:92f7:a4f  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:40:b9:ac  txqueuelen 1000  (Ethernet)
        RX packets 1139  bytes 355392 (347.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 876  bytes 118103 (115.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
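Besides the missing interfaces, the kubelet log on the affected node usually records why the CNI network never came up. A quick check, assuming kubelet runs under systemd:

# Look for CNI-related errors such as "cni plugin not initialized"
journalctl -u kubelet --no-pager | grep -i cni | tail -n 20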

Troubleshooting

1 Check the flannel pods on the master node

[root@master ~]# kubectl get pods -n kube-system | grep flannel
kube-flannel-ds-amd64-9ccf7      1/1     Running            0          7m23s
kube-flannel-ds-amd64-j8q9w      1/1     Running            0          7m23s
kube-flannel-ds-amd64-t6wxg      1/1     Running            0          7m23s
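It is also worth confirming that a flannel pod was actually scheduled on every node, including the new ones; with only three pods listed for five servers, the new nodes are clearly not covered. A quick check:

# -o wide shows which node each flannel pod is running on
kubectl get pods -n kube-system -o wide | grep flannel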

        If no flannel pods are found on the master node, run kubeadm reset and redeploy flannel:

kubeadm reset
rm -rf /etc/kubernetes/admin.conf 
rm -rf $HOME/.kube/config
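After the reset, the control plane and flannel have to be recreated. A sketch of the usual sequence; the pod CIDR is flannel's default and the manifest URL varies by flannel release, so verify both before running:

# Re-initialize the control plane with the pod network flannel expects by default
kubeadm init --pod-network-cidr=10.244.0.0/16

# Restore kubectl access for the current user
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config

# Redeploy flannel (check the URL against your flannel version)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml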

2 Reset the flannel network

(1) Delete the node (run on the master)

kubectl delete node xxx
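If the node is still running workloads, it is common to drain it before the delete above; a sketch, where xxx stands for the node name and the exact flags vary slightly by kubectl version:

# Evict regular pods first; DaemonSet pods such as flannel are skipped
kubectl drain xxx --ignore-daemonsets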

(2) Remove the cni0 and flannel.1 interfaces on the node (run on the node)

kubeadm reset

ifconfig cni0 down
ifconfig flannel.1 down

ip link del flannel.1
ip link del cni0

# Some of these commands may report errors; if an interface does not exist, the error can be ignored
# the interfaces are recreated once the node rejoins the cluster
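Flannel and the CNI plugin also keep state on disk, so many reset procedures additionally clear the CNI configuration and restart the runtime and kubelet. A sketch using the common default paths, which may differ on your install:

# Remove cached CNI network state and config (recreated when the node rejoins)
rm -rf /var/lib/cni/
rm -rf /etc/cni/net.d

# Restart so the stale network state is dropped
systemctl restart docker
systemctl restart kubelet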

(3) Prepare to rejoin the cluster (run on the master)

# Regenerate a join token with kubeadm
[root@master ~]# kubeadm token create --print-join-command
~~
kubeadm join 192.168.247.136:6443 --token x5phh9.9lpb629032p7dseb     --discovery-token-ca-cert-hash sha256:bd23534d635b46f5316f0d388bd88853a6ddb47b1c04129bf25ea31cdbbfba4a 
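Tokens created this way expire after 24 hours by default; if the join will happen later, the token can be listed or recreated with a custom TTL:

# List existing tokens and their remaining lifetime
kubeadm token list

# Create a token valid for 2 hours and print the matching join command
kubeadm token create --ttl 2h --print-join-command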

# Copy the kubeconfig to the node
[root@master ~]# scp /etc/kubernetes/admin.conf 
usage: scp [-12346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
           [-l limit] [-o ssh_option] [-P port] [-S program]
           [[user@]host1:]file1 ... [[user@]host2:]file2
[root@master ~]# scp /etc/kubernetes/admin.conf  [email protected]:/etc/kubernetes/admin.conf
[email protected]'s password: 
admin.conf                                                                    100% 5451     3.1MB/s   00:00 
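For kubectl to work on the node, point it at the copied kubeconfig; a minimal sketch, assuming a bash login shell:

# Run on the node; appending to ~/.bash_profile persists it across logins
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile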

(4) Join the node to the cluster (run on the node)

[root@slave1 ~]# kubeadm join 192.168.247.136:6443 --token x5phh9.9lpb629032p7dseb     --discovery-token-ca-cert-hash sha256:bd23534d635b46f5316f0d388bd88853a6ddb47b1c04129bf25ea31cdbbfba4a
~~~

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Joined successfully
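To confirm the fix, check on the master that the node reports Ready, and on the node that the interfaces are back. Note that flannel.1 appears once the flannel DaemonSet pod starts, while cni0 is only created after the first pod is scheduled onto the node:

# On the master
kubectl get nodes

# On the node
ip addr show flannel.1
ip addr show cni0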
