The test environment had not been used for a long time. Starting kubelet failed, and checking the status didn't show the exact point of failure:
[root@node1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2023-08-03 22:24:50 CST; 5s ago
Docs: https://kubernetes.io/docs/
Process: 2651 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 2651 (code=exited, status=1/FAILURE)
Aug 03 22:24:50 node1 kubelet[2651]: Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_3DES_E...
Aug 03 22:24:50 node1 kubelet[2651]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS...
Aug 03 22:24:50 node1 kubelet[2651]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file....
Aug 03 22:24:50 node1 kubelet[2651]: --topology-manager-policy string Topology Manager policy to use. Possible values: 'none', '...
Aug 03 22:24:50 node1 kubelet[2651]: --topology-manager-scope string Scope to which topology hints applied. Topology Manager co...
Aug 03 22:24:50 node1 kubelet[2651]: -v, --v Level number for the log level verbosity
Aug 03 22:24:50 node1 kubelet[2651]: --version version[=true] Print version information and quit
Aug 03 22:24:50 node1 kubelet[2651]: --vmodule pattern=N,... comma-separated list of pattern=N settings for fi...g format)
Aug 03 22:24:50 node1 kubelet[2651]: --volume-plugin-dir string The full path of the directory in which to search for addi...
Aug 03 22:24:50 node1 kubelet[2651]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the ...
Hint: Some lines were ellipsized, use -l to show in full.
Let's look at the logs: journalctl -xu kubelet
Aug 03 22:05:14 node1 kubelet[1391]: Error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such
Aug 03 22:05:14 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Aug 03 22:05:14 node1 kubelet[1391]: Usage:
Aug 03 22:05:14 node1 kubelet[1391]: kubelet [flags]
Aug 03 22:05:14 node1 kubelet[1391]: Flags:
Aug 03 22:05:14 node1 kubelet[1391]: --add-dir-header If true, adds the file directory to the header of the log mes
Aug 03 22:05:14 node1 kubelet[1391]: --address ip The IP address for the Kubelet to serve on (set to '0.0.0.0'
Aug 03 22:05:14 node1 kubelet[1391]: --allowed-unsafe-sysctls strings Comma-separated whitelist of unsafe sysctls or unsafe sysctl
Aug 03 22:05:14 node1 kubelet[1391]: --alsologtostderr log to standard error as well as files (DEPRECATED: will be r
Aug 03 22:05:14 node1 kubelet[1391]: --anonymous-auth Enables anonymous requests to the Kubelet server. Requests th
Aug 03 22:05:14 node1 systemd[1]: Unit kubelet.service entered failed state.
Aug 03 22:05:14 node1 kubelet[1391]: --application-metrics-count-limit int Max number of application metrics to store (per container) (d
Aug 03 22:05:14 node1 kubelet[1391]: --authentication-token-webhook Use the TokenReview API to determine authentication for beare
Aug 03 22:05:14 node1 kubelet[1391]: --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authen
Aug 03 22:05:14 node1 kubelet[1391]: --authorization-mode string Authorization mode for Kubelet server. Valid options are Alwa
Aug 03 22:05:14 node1 kubelet[1391]: --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook
Aug 03 22:05:14 node1 systemd[1]: kubelet.service failed.
Aug 03 22:05:14 node1 kubelet[1391]: --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webho
Aug 03 22:05:14 node1 kubelet[1391]: --azure-container-registry-config string Path to the file containing Azure container registry configur
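The real error is buried at the top of that usage dump. Not part of the original session, but a quick way to pull out just the failure lines:
$ journalctl -u kubelet --no-pager | grep -iE 'error|failed'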
A bit of googling suggests the certificates have expired.
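Before touching kubeadm, the expiry can also be confirmed directly with openssl (just a sanity check; the paths assume the default kubeadm layout under /etc/kubernetes/pki):
$ openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
$ openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver-kubelet-client.crt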
[root@node1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jul 14, 2023 15:36 UTC                   ca                      no
apiserver                  Jul 14, 2023 15:36 UTC                   ca                      no
apiserver-etcd-client      Jul 14, 2023 15:36 UTC                   etcd-ca                 no
apiserver-kubelet-client   Jul 14, 2023 15:36 UTC                   ca                      no
controller-manager.conf    Jul 14, 2023 15:36 UTC                   ca                      no
etcd-healthcheck-client    Jul 14, 2023 15:36 UTC                   etcd-ca                 no
etcd-peer                  Jul 14, 2023 15:36 UTC                   etcd-ca                 no
etcd-server                Jul 14, 2023 15:36 UTC                   etcd-ca                 no
front-proxy-client         Jul 14, 2023 15:36 UTC                   front-proxy-ca          no
scheduler.conf             Jul 14, 2023 15:36 UTC                   ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 11, 2032 15:36 UTC   8y              no
etcd-ca                 Jul 11, 2032 15:36 UTC   8y              no
front-proxy-ca          Jul 11, 2032 15:36 UTC   8y              no
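Before renewing anything, it's probably worth backing up the whole /etc/kubernetes directory first (my own precaution, not something kubeadm requires):
$ cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)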
Let's renew them:
[root@node1 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@node1 ~]#
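The renew output says kube-apiserver, kube-controller-manager, kube-scheduler and etcd must be restarted before they pick up the new certificates. The reboot further down takes care of that here, but on a node you don't want to reboot, one common trick (a sketch, assuming the default static-pod manifest directory) is to move the manifests out and back so kubelet recreates the pods:
$ mv /etc/kubernetes/manifests /etc/kubernetes/manifests.off
$ sleep 30    # give kubelet time to tear the static pods down
$ mv /etc/kubernetes/manifests.off /etc/kubernetes/manifests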
Verify again:
[root@node1 ~]# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[check-expiration] Error reading configuration from the Cluster. Falling back to default configuration

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Aug 02, 2024 14:59 UTC   364d            ca                      no
apiserver                  Aug 02, 2024 14:59 UTC   364d            ca                      no
apiserver-etcd-client      Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
apiserver-kubelet-client   Aug 02, 2024 14:59 UTC   364d            ca                      no
controller-manager.conf    Aug 02, 2024 14:59 UTC   364d            ca                      no
etcd-healthcheck-client    Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
etcd-peer                  Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
etcd-server                Aug 02, 2024 14:59 UTC   364d            etcd-ca                 no
front-proxy-client         Aug 02, 2024 14:59 UTC   364d            front-proxy-ca          no
scheduler.conf             Aug 02, 2024 14:59 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 11, 2032 15:36 UTC   8y              no
etcd-ca                 Jul 11, 2032 15:36 UTC   8y              no
front-proxy-ca          Jul 11, 2032 15:36 UTC   8y              no
But /etc/kubernetes/bootstrap-kubelet.conf still doesn't exist, so continue with the following steps:
$ cd /etc/kubernetes/pki/
$ mkdir -p /etc/kubernetes/pki/backup1    # make sure the backup directory exists
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} /etc/kubernetes/pki/backup1    # the old files must be moved away, not copied
$ kubeadm init --apiserver-advertise-address=192.168.56.101 phase certs all
$ cd /etc/kubernetes/
$ mkdir -p /etc/kubernetes/backup1    # make sure the backup directory exists
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} /etc/kubernetes/backup1    # the old files must be moved away, not copied
$ kubeadm init --apiserver-advertise-address=192.168.56.101 phase kubeconfig all
$ reboot
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
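Before copying anything to the other nodes, a quick check that the regenerated admin.conf actually works (this assumes the API server came back up after the reboot):
$ kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes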
You also need to copy ca.crt to the other nodes (for some reason the posts on Google/Baidu all leave this step out):
[root@node1 kubernetes]# scp -rp kubelet.conf node2:/etc/kubernetes
[root@node1 kubernetes]# scp -rp pki/ca.crt node2:/etc/kubernetes/pki
$ scp -rp /etc/kubernetes/admin.conf node2:/root/.kube/config
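On node2, kubelet still has to be restarted so it picks up the copied kubelet.conf (assuming node2 runs kubelet with the default config paths):
$ ssh node2 'systemctl restart kubelet && systemctl is-active kubelet'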
Verify it: