Kubernetes V1.19.3 kubeadm Deployment Notes (Part 1)

This article is my record and summary of a deployment done by following the blog post 《002.使用kubeadm安装kubernetes 1.17.0》 (https://www.cnblogs.com/zyxnh...) along with references such as 《Kubernetes权威指南》 (Kubernetes: The Definitive Guide).

The article is split into three parts. Part 1 covers preparing the underlying infrastructure, deploying the Master, and installing the Flannel network plugin. Part 2 covers deploying the nodes and a few containers that are needed along the way. Part 3 covers monitoring and some DevOps-related topics.

I. Preparation

1. Infrastructure:

I started with CentOS 8.2, but the Alibaba Cloud mirrors did not yet cover version 8, so I had to fall back to the CentOS 7 2003 release, i.e. CentOS Linux release 7.8.2003.

One master and two worker nodes.

[root@k8s-master ~]# cat  /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
2. Required software:

Everything is installed with yum and the versions are not pinned: kubeadm, kubectl and kubelet come out at v1.19.3, Docker at 1.13.1. Note that this cluster is built entirely with kubeadm; that command is the main thread running through the whole setup. Apart from kubeadm, kubelet, kubectl and docker, every cluster component runs as a Pod.

[root@k8s-master ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg     https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@k8s-master ~]# yum clean all
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl docker
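I let yum pick the versions here, but if you want to make sure all three packages land on the same release, a pinned install would look roughly like this (a sketch; it assumes 1.19.3 resolves to an exact RPM release in the mirror). Enabling kubelet at boot is needed either way:

# Pin kubelet/kubeadm/kubectl to one version instead of taking whatever is newest
yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 docker
# kubelet must be enabled so kubeadm can drive it; it will crash-loop until 'kubeadm init/join' runs, which is expected
systemctl enable --now docker kubelet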
3. Supporting tools:

Because the systems are minimal installs, a few other supporting packages are needed, all available via yum: bash-completion, strace, vim, wget, chrony, ntpdate, net-tools and so on.

[root@k8s-master ~]# yum install -y bash-completion strace vim wget chrony ntpdate net-tools
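Since chrony and ntpdate are installed anyway, it is worth making sure the clocks on all three machines stay in sync; certificates and etcd are sensitive to clock skew. A minimal sketch:

# Start chrony now and at boot, then check that time sources have been picked up
systemctl enable --now chronyd
chronyc sources -v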
4. Network:

All hosts can reach each other and the public internet. The virtualization platform is VirtualBox: the VMs reach the internet through Adapter 1 (NAT), while traffic between my laptop and the VMs, and among the VMs themselves, goes over the enp0s8 NIC, which maps to Adapter 2 (Host-only, vboxnet0).

5. Firewall:

SELinux should be disabled and the firewalld service turned off on all hosts (a sketch of the commands follows after the output below).

[root@k8s-master ~]# getenforce
Disabled
[root@k8s-master ~]# systemctl  status firewalld.service
    ● firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
     Docs: man:firewalld(1)
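For reference, this is roughly how to get a fresh machine into that state (a sketch; the sed assumes the stock /etc/selinux/config layout):

# Switch SELinux to permissive for the current boot and disable it permanently (full effect after reboot)
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Stop firewalld now and keep it off across reboots
systemctl disable --now firewalld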
6. Address forwarding:

kube-proxy relies on IPVS to proxy Services, so the relevant IPVS kernel modules must be enabled on every host; the usual way is a small modules file like the one below.

[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

The output looks like this:

[root@k8s-master ~]# lsmod | grep -E "ip_vs|nf_conntrack_ipv4"
nf_conntrack_ipv4      15053  10
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
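Besides the IPVS modules, the standard kubeadm preparation also expects bridged traffic to be visible to iptables and IP forwarding to be on. These sysctls are not in my original notes, so treat the following as a sketch (it assumes the br_netfilter module is available on this kernel):

# Make bridged pod traffic traverse iptables and enable IP forwarding
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system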
7. Swap partition:

Swap must be disabled on all hosts; Kubernetes refuses to run with swap enabled by default, reportedly mainly for performance reasons, since swapping is slow.
Temporary (until the next reboot):

[root@k8s-master ~]# swapoff -a

Permanent (which of course only takes effect after a reboot); a one-liner combining both steps is sketched after the fstab check below:

[root@k8s-master ~]# grep swap /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
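Both steps can be combined into a small sketch (it assumes the swap entry in /etc/fstab contains the word "swap", as above):

# Turn swap off immediately and comment out any swap lines so the change survives a reboot
swapoff -a
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab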
8. Image dependencies:

The command below shows which images the current version of kubeadm depends on, so they can be pulled in advance. The master needs everything in the list; the nodes only need kube-proxy and pause.

[root@k8s-master ~]# kubeadm  config images list
k8s.gcr.io/kube-apiserver:v1.19.3
k8s.gcr.io/kube-controller-manager:v1.19.3
k8s.gcr.io/kube-scheduler:v1.19.3
k8s.gcr.io/kube-proxy:v1.19.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
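If the host can reach a domestic mirror, kubeadm can also pre-pull everything in one go; a sketch assuming the Aliyun mirror carries the v1.19.3 images (retagging to k8s.gcr.io is still needed if you keep the default imageRepository):

# Pre-pull the whole list from a mirror instead of k8s.gcr.io
kubeadm config images pull --kubernetes-version v1.19.3 --image-repository registry.aliyuncs.com/google_containers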
9. Later improvements:

All of these initialization steps can later be wrapped into a single shell script, with each step as its own function, and simply run once when a host is being prepared.
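A skeleton of such a script might look like this (the function names and ordering are my own; each body is just the commands from the corresponding step above):

#!/bin/bash
# node-init.sh - run once on every master/worker before kubeadm (sketch)
set -euo pipefail

disable_security() { setenforce 0 || true; systemctl disable --now firewalld; }
disable_swap()     { swapoff -a; sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab; }
load_ipvs()        { bash /etc/sysconfig/modules/ipvs.modules; }   # assumes the modules file from step 6 exists
install_packages() { yum install -y kubelet kubeadm kubectl docker && systemctl enable --now docker kubelet; }

disable_security
disable_swap
load_ipvs
install_packages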

10. Important note:

All of the steps above must be run on every node in the cluster (the master and all workers). Taking it one step further, an operating system initialized with steps 1-9 above could be captured as an image, effectively an OS baseline for K8S V1.19.3, and reused to create more K8S clusters on a private or public cloud.

II. Deploying the Master

1. Initialization approach and key points:

Running the command below on the master requires the corresponding configuration file to exist first; once it does, kubeadm can simply run against it (I suggest not running it just yet; the actual run is in step 4).

[root@k8s-master ~]# kubeadm init --config kubeadm-config.yml
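My notes do not preserve the exact contents of kubeadm-config.yml, but based on the values that appear in the init output further down (API server on 192.168.56.99, the abcdef.0123456789abcdef token, podSubnet 10.244.0.0/16), a minimal version would look roughly like this; adjust the addresses and token to your own environment:

# Sketch of kubeadm-config.yml, reconstructed from the values used later in this post
cat > kubeadm-config.yml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- token: "abcdef.0123456789abcdef"
localAPIEndpoint:
  advertiseAddress: 192.168.56.99
nodeRegistration:
  name: 192.168.56.99
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.3
networking:
  podSubnet: 10.244.0.0/16       # must match the Flannel network
  serviceSubnet: 10.96.0.0/12
EOF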

Alternatively, initialize by passing the parameters directly:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16

At this point I recommend opening several extra terminals, all logged into the master. If the command hangs in the first terminal, you can run strace -p $PID from another one to see what the process is doing; if you see timeouts and the like, something is definitely wrong.
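Something along these lines works for the watching (a sketch, assuming kubeadm is the process that is stuck); the kubelet journal is usually the more informative of the two:

# Attach to the running kubeadm process and watch its network-related system calls
strace -f -p "$(pgrep -f 'kubeadm init')" -e trace=network
# In another terminal, follow the kubelet log, since kubeadm spends most of its time waiting on the kubelet
journalctl -u kubelet -f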

2. Preflight checks:

If repeated attempts keep failing or print nothing, run the "preflight" phase on its own:

[root@k8s-master ~]# kubeadm init phase preflight

Then deal with whatever it reports, e.g. Docker not installed, a non-empty etcd data directory, and so on.
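When a failed attempt leaves leftovers behind (the non-empty etcd directory is the classic case), resetting before the next try is usually the quickest fix; a sketch:

# Roll back whatever a previous 'kubeadm init' attempt created, then run init again
kubeadm reset -f
rm -f $HOME/.kube/config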

3. Preparing the images in advance:

Network conditions inside China are a very real problem, so it is best to pull all the images in advance and retag them.

[root@k8s-master opt]# for i in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do echo $i && docker pull kubeimage/$i-amd64:v1.19.3; done
[root@k8s-master opt]# for i in etcd-amd64 pause; do echo $i && docker pull kubeimage/$i-amd64:latest; done

Once pulled, retag the images: kubeadm is very strict about image names, which must match the k8s.gcr.io names exactly (a name-based loop is sketched after these commands).

[root@k8s-master opt]# docker tag cdef7632a242 k8s.gcr.io/kube-proxy:v1.19.3
[root@k8s-master opt]# docker tag 9b60aca1d818 k8s.gcr.io/kube-scheduler:v1.19.3
[root@k8s-master opt]# docker tag aaefbfa906bd k8s.gcr.io/kube-controller-manager:v1.19.3
[root@k8s-master opt]# docker tag bfe3a36ebd25 k8s.gcr.io/coredns:1.7.0
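The same retagging can be done by name in a loop, which is less error-prone than copying image IDs (a sketch, assuming the kubeimage/*-amd64:v1.19.3 tags pulled above):

# Retag the four core components from kubeimage/* to the k8s.gcr.io names kubeadm expects
for i in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  docker tag "kubeimage/${i}-amd64:v1.19.3" "k8s.gcr.io/${i}:v1.19.3"
done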

Another note: I searched for a long time and never found an etcd:3.4.13-0 image, so I simply pulled a recent one and changed the version tag by hand, again just to satisfy kubeadm. There is no way around it; the check really is that strict.

[root@k8s-master opt]# docker tag k8s.gcr.io/etcd:v3.4.2 k8s.gcr.io/etcd:3.4.13-0

Afterwards the two images share the same IMAGE ID; you can delete the one you no longer need, or leave it, it makes no difference.

[root@k8s-master opt]# docker rmi docker.io/mirrorgooglecontainers/etcd-amd64
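A quick way to confirm that every image kubeadm wants is now present locally under the right name (a sketch):

# Compare kubeadm's required image list against the local Docker cache
for img in $(kubeadm config images list 2>/dev/null); do
  docker image inspect "$img" >/dev/null 2>&1 && echo "OK       $img" || echo "MISSING  $img"
done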
4. Running the initialization:

With the images in place, everything becomes very simple. Run the initialization:

[root@k8s-master ~]# kubeadm init --config kubeadm-config.yml

If all goes well, the screen output looks like this:

[root@k8s-master opt]# kubeadm  init --config kubeadm-config.yml
W1102 22:29:15.980199    6337 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [192.168.56.99 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.99]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [192.168.56.99 localhost] and IPs [192.168.56.99 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [192.168.56.99 localhost] and IPs [192.168.56.99 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.503531 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 192.168.56.99 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node 192.168.56.99 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.99:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:e0bd52dc0916972310f01a26c3e742aef11fe6088550749e7281b4edca795e7e

It is worth saving this output for later, especially the last command, which is what a node runs to join the cluster.
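If the token expires (it does after 24 hours by default) or the output gets lost, the join command can be regenerated on the master at any time:

# Print a fresh 'kubeadm join ...' line with a new token and the current CA cert hash
kubeadm token create --print-join-command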

5. Checking the node and pods:
[root@k8s-master opt]# kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
192.168.56.99   NotReady      master   30m       v1.19.3

As for the pods: the CoreDNS pods are still Pending because they cannot be scheduled until a node is Ready, which in turn requires the network plugin installed in step 6 below; all the other pods are Running.

[root@k8s-master opt]# kubectl get pod --all-namespaces
NAMESPACE              NAME                                         READY   STATUS        RESTARTS   AGE
kube-system            coredns-f9fd979d6-dn7v4                      1/1     Pending       1          4d22h
kube-system            coredns-f9fd979d6-mh2pn                      1/1     Pending       1          4d22h
kube-system            etcd-192.168.56.99                           1/1     Running       1          4d22h
kube-system            kube-apiserver-192.168.56.99                 1/1     Running       1          4d22h
kube-system            kube-controller-manager-192.168.56.99        1/1     Running       3          4d22h
kube-system            kube-proxy-fkbfn                             1/1     Running       1          4d21h
kube-system            kube-scheduler-192.168.56.99                 1/1     Running       2          4d22h
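To see exactly why the CoreDNS pods are stuck, describe them by their label and read the Events section at the bottom (a sketch; k8s-app=kube-dns is the label CoreDNS carries):

# The Events section explains the Pending state (no Ready, schedulable node yet)
kubectl -n kube-system describe pod -l k8s-app=kube-dns | tail -n 20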
6. Installing the Flannel network plugin:

Honestly, I did not know Flannel very well before; rebuilding K8S this time, I noticed the following characteristics:
(1) It is a virtual overlay network.
(2) Its job is to manage the containers' network addresses, keep them unique across the cluster, and let containers communicate with each other.
(3) In transit it wraps each packet in an extra layer of encapsulation and unwraps it on arrival, so the original packet is delivered intact across hosts.
(4) It creates a virtual NIC on the host, which communicates by bridging to Docker's bridge, so it depends heavily on Docker.
(5) etcd keeps the Flannel configuration consistent across all nodes, and each Flannel instance also watches etcd for changes in the network state.

With these concepts in place, create it from the YAML manifest.

[root@k8s-master opt]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master opt]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

The thing to watch here is that the pod subnet in the Flannel config must match the podSubnet configured in kubeadm-config.yml.

[root@k8s-master opt]# grep podSubnet /opt/kubeadm-config.yml
podSubnet: 10.244.0.0/16 # flannel network
[root@k8s-master opt]# grep Network /opt/kube-flannel.yml
hostNetwork: true
  "Network": "10.244.0.0/16",
  hostNetwork: true
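Before declaring the master done, it is worth checking that the Flannel DaemonSet pod is running and that the flannel.1 interface mentioned in point (4) has appeared on the host (a sketch):

# Flannel pod on this node should be Running
kubectl -n kube-system get pods -l app=flannel -o wide
# The overlay (VXLAN) interface Flannel creates on the host
ip -d link show flannel.1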
7. Master configuration complete.
[root@k8s-master opt]# kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
192.168.56.99   Ready      master   4d22h   v1.19.3
