Kubernetes v1.19.3 kubeadm Deployment Notes (Part 2)

This article is my record and summary of a deployment done by following the blog post《002.使用kubeadm安装kubernetes 1.17.0》(https://www.cnblogs.com/zyxnh...) together with references such as《Kubernetes权威指南》(The Definitive Guide to Kubernetes).

This series has three parts. Part 1 covers the preparation of the underlying infrastructure, the deployment of the master, and the deployment of the Flannel network plugin. Part 2 (this article) covers the node deployment process and the deployment of some essential containers. Part 3 covers monitoring and DevOps-related topics.

III. Deploying a Node

Here it is of course advisable to use a configuration management tool such as Ansible and deploy all nodes uniformly. This document uses a single node as the example.
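As a reference, a minimal Ansible sketch for running the join on a whole group of nodes might look like the following (the inventory group name k8s-nodes is an assumption; the join parameters are the same ones used below):

# join-nodes.yml - minimal sketch, assuming an inventory group named "k8s-nodes"
- hosts: k8s-nodes
  become: yes
  tasks:
    - name: join the node to the cluster (skipped if the node has already joined)
      shell: >
        kubeadm join 192.168.56.99:6443
        --token abcdef.0123456789abcdef
        --discovery-token-ca-cert-hash sha256:ef543a59aac99e4f86e6661a7eb04d05b8a03269c7784378418ff3ff2a2d2d3c
      args:
        creates: /etc/kubernetes/kubelet.conf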

1. Joining the cluster: the kubeadm join command
[root@k8s-node01 opt]# kubeadm join 192.168.56.99:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:ef543a59aac99e4f86e6661a7eb04d05b8a03269c7784378418ff3ff2a2d2d3c
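If the token printed during kubeadm init has already expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

[root@k8s-master opt]# kubeadm token create --print-join-command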
2. Check the result:

If it succeeded, the screen output should look like the following; if it failed, look into the cause.

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Mine failed here. After digging in from two terminals I finally found that it was the same old problem: the image pull was stuck. I had assumed that pulling the images once on the master was enough, but in fact every node needs the same images. It makes sense once you think about it: images live as objects on the local filesystem, so one machine having them says nothing about another machine, and neither Kubernetes nor Docker has any mechanism for synchronizing images across hosts.

[root@k8s-node01 opt]# docker pull kubeimage/kube-proxy-amd64:v1.19.3
[root@k8s-node01 opt]# docker pull kubeimage/pause-amd64:3.2
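Note that kubelet looks the images up under their k8s.gcr.io names, so after pulling from a mirror repository they usually still need to be retagged (a sketch, assuming the default image names used by kubeadm 1.19):

[root@k8s-node01 opt]# docker tag kubeimage/kube-proxy-amd64:v1.19.3 k8s.gcr.io/kube-proxy:v1.19.3
[root@k8s-node01 opt]# docker tag kubeimage/pause-amd64:3.2 k8s.gcr.io/pause:3.2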

This is exactly why, as mentioned earlier, it is better to bake these container images into the operating system image. Private clouds such as OpenStack or VMware vSphere and public clouds such as Alibaba Cloud all support custom images, which speeds things up considerably. Beyond having the images locally, all the common configuration no longer needs to be redone machine by machine, which saves a lot of effort.
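If building a custom OS image is not an option, a simpler way to pre-seed a new node (a sketch; the archive name is arbitrary, and it assumes the master already holds the images under their k8s.gcr.io names) is to export the images on one machine and import them on the other:

[root@k8s-master opt]# docker save -o k8s-node-images.tar k8s.gcr.io/kube-proxy:v1.19.3 k8s.gcr.io/pause:3.2
[root@k8s-master opt]# scp k8s-node-images.tar k8s-node01:/opt/
[root@k8s-node01 opt]# docker load -i k8s-node-images.tar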

3. Problems encountered while deploying the node.

(1) kubelet would not start. Checking the systemd service status showed it stuck in activating rather than running. A look at /var/log/messages showed that kubelet.service checks for swap on startup and refuses to run if a swap partition is present, so swap has to be disabled by hand.
The log states it plainly:

Nov  6 09:04:24 k8s-node01 kubelet: F1106 09:04:24.680751    2635 server.go:265] failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename#011#011#011#011Type#011#011Size#011Used#011Priority /dev/dm-1                               partition#011839676#0110#011-2]
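Disabling swap on the node so that kubelet can start (the sed line simply comments out the swap entry in /etc/fstab so it stays off after a reboot; alternatively kubelet can be started with --fail-swap-on=false, as the log suggests):

[root@k8s-node01 opt]# swapoff -a
[root@k8s-node01 opt]# sed -i '/ swap / s/^/#/' /etc/fstab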
4. Node deployment complete
[root@k8s-master opt]# kubectl  get nodes
NAME            STATUS     ROLES    AGE     VERSION
192.168.56.99   Ready      master   4d22h   v1.19.3
k8s-node01      NotReady   <none>   4d22h   v1.19.3
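If a node stays NotReady for a long time, it is usually the CNI plugin (or its images) that has not come up on that node yet; something along these lines helps narrow it down:

[root@k8s-master opt]# kubectl describe node k8s-node01 | grep -A 5 Conditions
[root@k8s-master opt]# kubectl get pods -n kube-system -o wide | grep k8s-node01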

IV. Deploying Essential Containers

1. kubernetes-dashboard is the web management UI for the whole cluster, and it is a must-have for day-to-day cluster administration.
[root@k8s-master opt]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

Start by applying the stock configuration, and adjust later if anything turns out to be wrong.

[root@k8s-master opt]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

If anything goes wrong, the output will tell you which resource failed to create, so you can fix it and try again.

[root@k8s-master opt]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76585494d8-5jtvw   1/1     Running   0          86s
kubernetes-dashboard-b7ffbc8cb-4xcdp         1/1     Running   0          86s
2. Exposing the port:

At this point there is a question: how do I expose the dashboard so that my own computer can reach it? The dashboard only has a virtual ClusterIP (which can neither be reached directly nor pinged from outside the cluster) and a pod IP, which is just as unreachable. The answer is to add a bit of configuration to the dashboard YAML so that the Service is also mapped onto the node's IP, and then access it as NodeIP:NodePort.

[root@k8s-master opt]# vim recommended.yaml 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add this line: expose the Service as a NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32443 # add this line: pin the NodePort value (must fall within the default 30000-32767 range)
  selector:
    k8s-app: kubernetes-dashboard

Tear down the whole set of existing dashboard resources and recreate them.

[root@k8s-master opt]# kubectl delete -f recommended.yaml 
namespace "kubernetes-dashboard" deleted
serviceaccount "kubernetes-dashboard" deleted
service "kubernetes-dashboard" deleted
secret "kubernetes-dashboard-certs" deleted
secret "kubernetes-dashboard-csrf" deleted
secret "kubernetes-dashboard-key-holder" deleted
configmap "kubernetes-dashboard-settings" deleted
role.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrole.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
deployment.apps "kubernetes-dashboard" deleted
service "dashboard-metrics-scraper" deleted
deployment.apps "dashboard-metrics-scraper" deleted

[root@k8s-master opt]# kubectl apply -f recommended.yaml

With that, the dashboard can be reached at: https://192.168.56.99:32443
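To confirm the Service really is exposed the way the YAML intends, check it on the master; TYPE should show NodePort and PORT(S) should show 443:32443/TCP:

[root@k8s-master opt]# kubectl get svc -n kubernetes-dashboard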

3. The dashboard token and permissions

When the page opens it asks whether you want to authenticate with a token or with a kubeconfig. Choosing kubeconfig means you have a copy of the kubeadm admin kubeconfig locally and upload it through the browser for verification, which is straightforward.

For the token method, fetch it as follows and paste the printed value into the browser:

[root@k8s-master opt]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard |grep  kubernetes-dashboard-token | awk '{print $1}') |grep token | awk '{print $2}'
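An equivalent way to pull the token out with jsonpath instead of grep (a sketch; it relies on the ServiceAccount token secret that v1.19 still creates automatically):

[root@k8s-master opt]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa kubernetes-dashboard -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d && echo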

Once logged in you will find that very little is visible, because the dashboard's default permissions are quite limited.

[root@k8s-master opt]# vim  recommended.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin            # changed: bind the cluster-admin role for full access
  #name: kubernetes-dashboard    # original value shipped in recommended.yaml
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

Tear it down and redeploy the dashboard again; naturally the token will have changed the next time you log in.
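Binding cluster-admin to the dashboard's own ServiceAccount is fine for a lab, but a slightly cleaner alternative (roughly what the dashboard project's documentation suggests; the account name admin-user is arbitrary) is to create a dedicated ServiceAccount, bind cluster-admin to it, and log in with that account's token instead:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard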

4. Problems encountered while deploying kubernetes-dashboard, and a few points worth noting.

(1) During the apply I found the kubernetes-dashboard container crashing over and over, stuck in the CrashLoopBackOff state.
I dug into it in several ways: kubectl describe pod kubernetes-dashboard-b7ffbc8cb-4xcdp -n kubernetes-dashboard to check the pod's status, docker logs $CONTAINERID on the node, and the kubelet entries in the system messages. It finally turned out to be a networking problem: the container could not reach the Service ClusterIP 10.96.0.1.

Nov  3 23:48:54 k8s-node01 journal: panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: no route to host

The route did in fact exist, but the error was clearly pointing in that direction. It turned out that firewalld.service had never been stopped; once it was stopped, the container was created successfully.
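Stopping and disabling firewalld on each node (assuming you rely on the iptables rules managed by kube-proxy and the CNI plugin rather than on firewalld):

[root@k8s-node01 opt]# systemctl stop firewalld
[root@k8s-node01 opt]# systemctl disable firewalld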

(2) The dashboard authors put it in a dedicated namespace, unlike older versions, which lumped everything into kube-system.

(3) Images on quay.io are usable and pull very quickly.
