Kubernetes Cluster Deployment Log

Contents

  • Environment preparation
    • Server specs
    • Deployment versions
    • Disable the firewall
    • Disable SELinux
    • Disable swap
    • Set the hostname
    • Configure the hosts file
  • Runtime installation (Docker)
    • Installation log
  • kubeadm installation
    • Add the Kubernetes yum repo
    • Install kubelet, kubeadm, and kubectl
    • Start kubelet
    • Download the kube images
      • List the image versions the cluster needs
      • Write and run the image download script
  • Cluster creation
    • Initialize the control plane (master only)
    • Configure the kubectl user connection (master only)
    • Add the environment variable
    • Install the network plugin (master only)
    • Join the worker nodes (nodes only)
      • Sync time
      • Install docker, kubelet, kubeadm, and kubectl
      • Join the cluster
  • References

Environment preparation

Server specs

Hostname   Specs                                          IP
k8s-master 8 cores/16 GiB/CentOS 7.8.2003 64-bit          172.16.0.63
k8s-node1  64 cores/2 TiB/CentOS 7.9 64-bit (bare metal)  172.16.0.30
k8s-node2  8 cores/16 GiB/CentOS 7.8.2003 64-bit          172.16.0.88
k8s-node3  8 cores/16 GiB/CentOS 7.8.2003 64-bit          172.16.0.32

Deployment versions

Tool       Version
docker 20.10.16
Kubernetes 1.23.6

Disable the firewall

Commands:

[root@k8s-master ~] systemctl stop firewalld
[root@k8s-master ~] systemctl disable firewalld
[root@k8s-master ~] iptables -F

Confirm the firewall is stopped:

[root@k8s-master ~] systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Disable SELinux

[root@k8s-master ~] sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@k8s-master ~] sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

Turn off enforcement for the current session (here SELinux is already disabled, as the output shows):

[root@k8s-master ~] setenforce 0
setenforce: SELinux is disabled

Disable swap

Turn off every device configured as swap in /etc/fstab:

[root@k8s-master ~] swapoff -a

Back up /etc/fstab:

[root@k8s-master ~] cp /etc/fstab /etc/fstab.bak

Comment out the swap line in /etc/fstab:

[root@k8s-master ~] sed -ri 's/.*swap.*/#&/' /etc/fstab

Reload everything in /etc/fstab:

[root@k8s-master ~] mount -a
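The sed used above comments out the swap line rather than deleting it ('&' in the replacement stands for the whole matched line). A self-contained sketch of its effect, run against a throwaway copy instead of the real /etc/fstab:

```shell
# Demonstrate the swap-commenting sed on a disposable fstab copy.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
UUID=1234-ABCD          /boot xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$tmp"   # same command as above
grep swap "$tmp"                  # the swap line now starts with '#'
rm -f "$tmp"
```

Note that swapoff -a only lasts until the next reboot; commenting the fstab entry is what keeps swap off permanently, which kubelet requires.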

Set the hostname

Editing the /etc/hostname file changes the hostname permanently.
For a temporary change, the hostname command is recommended:

[root@k8s-master ~] hostname k8s-master # hostname your_hostname

Configure the hosts file

Edit /etc/hosts and add every server's IP and hostname:

172.16.0.63     k8s-master
172.16.0.30     k8s-node1
172.16.0.88     k8s-node2
172.16.0.32     k8s-node3
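The same four entries have to land in /etc/hosts on every machine. One way to distribute them (a sketch, and idempotent, so running it repeatedly will not duplicate lines):

```shell
# Append the cluster's hosts entries to a hosts file, skipping duplicates.
add_hosts() {
  target="$1"
  while read -r entry; do
    grep -qF "$entry" "$target" 2>/dev/null || echo "$entry" >> "$target"
  done <<'EOF'
172.16.0.63     k8s-master
172.16.0.30     k8s-node1
172.16.0.88     k8s-node2
172.16.0.32     k8s-node3
EOF
}
add_hosts /etc/hosts || true   # needs root to write /etc/hosts
```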

Runtime installation (Docker)

Kubernetes is only a container orchestrator; underneath it depends on a container runtime. The supported runtimes include Docker, containerd, and CRI-O. Docker is the most widely used, though its long-term fate in the Kubernetes ecosystem is somewhat uncertain; this installation uses Docker anyway.
You can follow the steps in the official Docker documentation for CentOS. It describes three installation methods; the one in the "Install using the repository" section is recommended.

Things to note

  • Different Docker versions support different Kubernetes versions, so pick the Kubernetes version first and check the compatibility list. Reference: the Kubernetes dependency list (release-1.23). Kubernetes 1.23 is validated against Docker 20.10; the 1.24 and 1.25 lists no longer mention a Docker dependency at all, so this installation uses Docker 20.10.16
  • The commands under "Set up the repository" can optionally be replaced with:
[root@k8s-master ~] wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Installation log

# Remove old Docker versions
[root@k8s-master ~] yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
# Add the Docker download mirror to the yum repos
[root@k8s-master ~] yum install -y yum-utils
[root@k8s-master ~] yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install the pinned Docker version
[root@k8s-master ~] yum install -y docker-ce-20.10.16 docker-ce-cli-20.10.16 containerd.io docker-compose-plugin
# Start Docker
[root@k8s-master ~] systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Point Docker at a domestic registry mirror
[root@k8s-master ~] tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF
# Restart Docker
[root@k8s-master ~] systemctl restart docker
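Worth checking right after this step: kubelet defaults to the systemd cgroup driver (see the notes in the kubeadm section), while docker-ce 20.10 defaults to cgroupfs, and a mismatch keeps kubelet from starting (the last entry under References is exactly that error). `docker info --format '{{.CgroupDriver}}'` prints the active driver. If it says cgroupfs, one option is a daemon.json along these lines, a sketch combining the mirror above with an explicit systemd driver; restart docker after writing it:

```shell
# daemon.json with both the registry mirror and a systemd cgroup driver
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
```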

kubeadm installation

The official kubeadm docs are a useful reference.

Add the Kubernetes yum repo

[root@k8s-master ~] cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl

[root@k8s-master ~] yum install -y --setopt=obsoletes=0 kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6

Start kubelet

[root@k8s-master ~] systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Notes:

  • obsoletes=1 means yum removes obsoleted packages while updating; obsoletes=0 means updates keep them
  • once kubelet is running, journalctl -f -u kubelet shows its detailed logs
  • kubelet uses systemd as its cgroup driver by default
  • from this point kubelet restarts every few seconds, crash-looping while it waits for instructions from kubeadm

Download the kube images

List the image versions the cluster needs

[root@k8s-master ~] kubeadm config images list
I0914 10:26:03.481454   18303 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.23
k8s.gcr.io/kube-apiserver:v1.23.10
k8s.gcr.io/kube-controller-manager:v1.23.10
k8s.gcr.io/kube-scheduler:v1.23.10
k8s.gcr.io/kube-proxy:v1.23.10
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6

Write and run the image download script

Edit the image version numbers in the download script to match the list above (worker nodes only need kube-proxy and pause):

[root@k8s-master ~] tee ./images.sh <<'EOF'
#!/bin/bash

images=(
kube-apiserver:v1.23.10
kube-controller-manager:v1.23.10
kube-scheduler:v1.23.10
kube-proxy:v1.23.10
pause:3.6
etcd:3.5.1-0
coredns:v1.8.6
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
EOF
[root@k8s-master ~] bash images.sh 
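The script stores the images under their Aliyun names. Since kubeadm init below is run with --image-repository pointing at an Aliyun registry, retagging is not strictly needed here; if you ever need the original k8s.gcr.io names instead, a mapping like the following works. Note the one irregular case: upstream coredns lives one level deeper, at k8s.gcr.io/coredns/coredns.

```shell
# Map an Aliyun-mirrored image name back to its upstream k8s.gcr.io name.
gcr_name() {
  case $1 in
    coredns:*) echo "k8s.gcr.io/coredns/$1" ;;  # nested path upstream
    *)         echo "k8s.gcr.io/$1" ;;
  esac
}

# Print dry-run 'docker tag' commands; pipe the output to bash to apply them.
for imageName in kube-apiserver:v1.23.10 kube-controller-manager:v1.23.10 \
                 kube-scheduler:v1.23.10 kube-proxy:v1.23.10 \
                 pause:3.6 etcd:3.5.1-0 coredns:v1.8.6; do
  echo "docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName $(gcr_name "$imageName")"
done
```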

Cluster creation

Creating a cluster with kubeadm - official docs
This part of the official docs is long-winded and mostly not needed here. If this is your first install, follow this document; if it is not and you need to change some configuration, follow the official docs instead.

Initialize the control plane (master only)

[root@k8s-master ~] kubeadm init \
--apiserver-advertise-address=172.16.0.63 \
--control-plane-endpoint=k8s-master \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks

Parameter notes:

  • apiserver-advertise-address: the address the cluster is advertised on, i.e. the IP the API server listens on. Use this machine's IP; run ip a if you are not sure
  • image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so point at the Aliyun mirror instead
  • kubernetes-version: the k8s version; keep it identical to the packages installed earlier
  • service-cidr: the virtual IP range for cluster-internal Services, the uniform entry point in front of Pods. Defaults to 10.96.0.0/12; optional, the default is fine
  • pod-network-cidr: the IP range for the Pod network. The default 10.244.0.0/16 can be used as-is
  • add --v=6, --v=10, etc. to see detailed logs
  • none of these ranges may overlap each other or the host network (for example, 10.96.0.0/12 and 10.100.0.0/16 overlap)

If init fails, roll back with:
[root@k8s-master ~] kubeadm reset -f
[root@k8s-master ~] rm -rf /etc/kubernetes
[root@k8s-master ~] rm -rf /var/lib/etcd/
[root@k8s-master ~] rm -rf $HOME/.kube # usually unnecessary on CentOS

After a successful init, write down the join command printed at the end; worker nodes need it to join the cluster.
(screenshot: kubeadm init output showing the join command)

Configure the kubectl user connection (master only)

Copy the kubeconfig that kubectl uses to authenticate to the cluster into its default path.
Run the commands shown in the init output:

[root@k8s-master ~] mkdir -p $HOME/.kube
[root@k8s-master ~] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~] sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl reads this config file.

Add the environment variable

If you deployed k8s as root, make sure to add the KUBECONFIG environment variable as prompted at the end of the init output. This is very important:

[root@k8s-master ~] export KUBECONFIG=/etc/kubernetes/admin.conf
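The export above only lasts for the current shell session. To make it survive new logins, one option (a sketch; the profile file name is an assumption, adjust it to your shell) is an idempotent append to root's ~/.bash_profile:

```shell
# Append the KUBECONFIG export to a profile file, at most once.
persist_kubeconfig() {
  profile="$1"
  line='export KUBECONFIG=/etc/kubernetes/admin.conf'
  grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
}
persist_kubeconfig "$HOME/.bash_profile"
```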

Install the network plugin (master only)

As the init output prompts, a network plugin is needed for pod-to-pod communication. There are many options, e.g. Calico and Flannel; this deployment uses the officially recommended Flannel.
The install takes quite a while with no progress output at all (at one point I assumed it had been blocked and failed), so be patient with this command.

[root@k8s-master ~] kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Now check the state of the master:

[root@k8s-master k8s_install] kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   64m   v1.23.6
[root@k8s-master k8s_install] kubectl get pods
No resources found in default namespace.
[root@k8s-master k8s_install] kubectl get ns
NAME              STATUS   AGE
default           Active   64m
kube-flannel      Active   2m6s
kube-node-lease   Active   64m
kube-public       Active   64m
kube-system       Active   64m
[root@k8s-master ~] kubectl get pods -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-tmhs4                1/1     Running   0          4m16s
kube-system    coredns-6d8c4cb4d-84btc              1/1     Running   0          66m
kube-system    coredns-6d8c4cb4d-mnlhl              1/1     Running   0          66m
kube-system    etcd-k8s-master                      1/1     Running   0          66m
kube-system    kube-apiserver-k8s-master            1/1     Running   0          66m
kube-system    kube-controller-manager-k8s-master   1/1     Running   0          66m
kube-system    kube-proxy-z965w                     1/1     Running   0          66m
kube-system    kube-scheduler-k8s-master            1/1     Running   0          66m

Join the worker nodes (nodes only)

Sync time

Really, NTP should already have been confirmed working and in sync when the master was deployed; the remaining nodes just need to use the same NTP source as the master.

  1. Check the master's NTP source:
[root@k8s-master ~] timedatectl # check NTP status
      Local time: Thu 2022-09-15 09:59:08 CST
  Universal time: Thu 2022-09-15 01:59:08 UTC
        RTC time: Thu 2022-09-15 01:59:09
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
[root@k8s-master ~] timedatectl status # same output as plain timedatectl
[root@k8s-master ~] chronyc sources # list NTP sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 172.19.0.201                  3  10   377   596   +126us[ +169us] +/-   43ms
  2. Point the node at the master's NTP source
[root@wnccqo6a ~] ntpdate 172.19.0.201
15 Sep 02:01:58 ntpdate[59362]: the NTP socket is in use, exiting

Alternatively edit the ntp configuration with vim /etc/ntp.conf:

# /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
driftfile /var/lib/ntp/ntp.drift
# Enable this if you want statistics to be logged.
#statsdir /var/log/ntpstats/
statistics loopstats peerstats clockstats
filegen loopstats file loopstats type day enable
filegen peerstats file peerstats type day enable
filegen clockstats file clockstats type day enable
# By default, exchange time with everybody, but don't allow configuration.
restrict -4 default kod notrap nomodify nopeer noquery
restrict -6 default kod notrap nomodify nopeer noquery
# Local users may interrogate the ntp server more closely.
restrict 127.0.0.1
restrict ::1
# Remote NTP server(s) to synchronize with.
server 172.19.0.201 # the line to double-check
# Specify the internal hardware clock as a reference clock.
# Set a high stratum so this is only used if all external clocks fail.
# This will mitigate skew until external clocks return to service.
server 127.127.1.0 # local clock address
fudge  127.127.1.0 stratum 10
  3. Start the NTP service on the node
[root@wnccqo6a ~] systemctl start ntpd
[root@wnccqo6a ~] systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@wnccqo6a ~] systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-09-15 02:00:56 UTC; 6s ago
  Process: 59309 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 59310 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─59310 /usr/sbin/ntpd -u ntp:ntp -g

Sep 15 02:00:56 k8s-node1 ntpd[59310]: Listen normally on 3 eth0 169.254.3.1 UDP 123
Sep 15 02:00:56 k8s-node1 ntpd[59310]: Listen normally on 4 eth3 172.16.0.30 UDP 123
Sep 15 02:00:56 k8s-node1 ntpd[59310]: Listen normally on 5 lo ::1 UDP 123
Sep 15 02:00:56 k8s-node1 ntpd[59310]: Listen normally on 6 eth3 fe80::3eec:efff:fef9:e6d4 UDP 123
Sep 15 02:00:56 k8s-node1 ntpd[59310]: Listen normally on 7 eth0 fe80::b23a:f2ff:feb6:59f UDP 123
Sep 15 02:00:56 k8s-node1 ntpd[59310]: Listening on routing socket on fd #24 for interface updates
Sep 15 02:00:56 k8s-node1 ntpd[59310]: 0.0.0.0 c016 06 restart
Sep 15 02:00:56 k8s-node1 ntpd[59310]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Sep 15 02:00:56 k8s-node1 ntpd[59310]: 0.0.0.0 c011 01 freq_not_set
Sep 15 02:00:58 k8s-node1 ntpd[59310]: 0.0.0.0 c514 04 freq_mode
  4. Switch the node's time zone
[root@wnccqo6a ~] timedatectl # check the current time zone
Warning: Ignoring the TZ variable. Reading the system's time zone setting only.

      Local time: Thu 2022-09-15 02:07:35 UTC
  Universal time: Thu 2022-09-15 02:07:35 UTC
        RTC time: Thu 2022-09-15 02:07:35
       Time zone: UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a
[root@wnccqo6a ~] timedatectl set-timezone 'Asia/Shanghai' # switch the time zone
[root@wnccqo6a ~] timedatectl # confirm the switch worked
Warning: Ignoring the TZ variable. Reading the system's time zone setting only.

      Local time: Thu 2022-09-15 10:08:49 CST
  Universal time: Thu 2022-09-15 02:08:49 UTC
        RTC time: Thu 2022-09-15 02:08:49
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

Install docker, kubelet, kubeadm, and kubectl

Same steps as on the master, with identical versions; not repeated here.

  • Tip: if a node cannot reach the internet for docker pull, docker save the images on the master, copy them over, and docker load them on the node
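That save/load tip can be sketched like this; the tarball name and node IP are made up for illustration, and a worker only needs kube-proxy and pause (plus whatever your workloads use):

```shell
# Build the save/copy/load commands for shipping images to an offline node.
# Printed as dry-run strings; run each one on the machine indicated.
REPO=registry.cn-hangzhou.aliyuncs.com/google_containers
TARBALL=kube-node-images.tar

save_cmd() { echo "docker save -o $TARBALL $REPO/kube-proxy:v1.23.10 $REPO/pause:3.6"; }
copy_cmd() { echo "scp $TARBALL root@172.16.0.30:/root/"; }
load_cmd() { echo "docker load -i $TARBALL"; }

save_cmd   # on the master
copy_cmd   # on the master (or use any other transfer method)
load_cmd   # on the node
```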

Join the cluster

With the packages installed and the images downloaded on the node, run the join command:

[root@wnccqo6a k8s_install] kubeadm join k8s-master:6443 --token begohi.8petuzcxv8rq0w3uf --discovery-token-ca-cert-hash sha256:7f2750da4f4qwe8668ad3a2beace8a16ddf3bf71cbb030bf7e2972f29745186 --v=5

On success, the output ends with:

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0915 15:10:57.363064  107895 cert_rotation.go:137] Starting client certificate rotation controller
I0915 15:10:57.363488  107895 kubelet.go:218] [kubelet-start] preserving the crisocket information for the node
I0915 15:10:57.363509  107895 patchnode.go:31] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
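Two things worth knowing about this join command. The token expires (after 24 hours by default), and kubeadm token create --print-join-command on the master prints a fresh, complete join command. Also, the --discovery-token-ca-cert-hash is simply the sha256 of the cluster CA's DER-encoded public key, so it can be recomputed from /etc/kubernetes/pki/ca.crt at any time. The sketch below demonstrates that pipeline on a throwaway self-signed certificate standing in for ca.crt:

```shell
# Recompute a kubeadm discovery hash: sha256 of the CA cert's DER public key.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | sha256sum | cut -d' ' -f1
}

# Demo on a throwaway cert; on the real master use /etc/kubernetes/pki/ca.crt.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
ca_cert_hash "$tmpdir/ca.crt"   # prints 64 hex chars; pass as sha256:<hash>
rm -rf "$tmpdir"
```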

The node is also visible from the master:

[root@gfaa3wjs k8s_install]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   24h   v1.23.6
k8s-node1    Ready    <none>                 37m   v1.23.6

The remaining two nodes follow the same steps.
Naturally some problems came up along the way; they are collected here:
Kubernetes cluster deployment pitfalls

The deployment was honestly not easy; thanks to the big guy for patiently answering my questions and for his full support! Haha

References:

Kubernetes (k8s) 1.23.6 cluster installation and deployment based on Docker
Introduction to Kubernetes and a deployment tutorial
Kubernetes deployment - the most detailed hands-on walkthrough
Single-node k8s deployment on CentOS 7 with kubeadm
kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
