02. Deploying k8s with kubeadm

1. Overview of k8s installation and deployment

1.1 Deployment tools

  • Batch deployment tools such as ansible/saltstack
  • Manual binary installation
  • kubeadm
  • Package managers such as apt/yum, running the components as daemons on the host (similar to nginx), started via service scripts

1.2 Deploying with kubeadm

Official documentation: https://kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/

kubeadm has gradually stabilized since version 1.11 and can be used in production. The latest release at the time of writing is 1.25.

Use kubeadm, the deployment tool provided by the k8s project, to install automatically: install docker and the other required components on the master and node hosts, then initialize the cluster; the control services on the masters and the services on the nodes all run as pods.

Services that run as pods:
  master nodes: kube-controller-manager, kube-scheduler, kube-apiserver
  node nodes: kube-proxy

Services that run directly as binaries:
  node nodes: kubelet

master nodes:
  do not run business containers; they are only responsible for cluster scheduling and management
node nodes:
  run the business containers

2. Introduction to kubeadm

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm/

2.1 Pre-installation notes

Disable swap, selinux (CentOS) and iptables, and tune the kernel parameters and resource limit parameters.

2.2 Deployment procedure

1. Basic environment preparation: disable swap, selinux (CentOS) and iptables, and tune kernel parameters and resource limits
2. Deploy harbor and an haproxy+keepalived highly available reverse proxy
3. Install docker on the master and node hosts
4. Install the specified versions of kubeadm, kubelet, kubectl and docker on all masters
5. Install the specified versions of kubeadm, kubelet and docker on all nodes; kubectl is optional on nodes,
   depending on whether you need to run kubectl there to manage the cluster and its pods
6. Run kubeadm init on a master node to initialize the cluster
7. Verify the master node status
8. On each node, use kubeadm to join the node to the k8s master (authenticated with the token and CA public key generated on the master)
9. Verify the node status
10. Create pods and test network communication
11. Deploy the Dashboard web service
12. k8s cluster upgrade example

3. Basic environment preparation

3.1 Server environment

Start from a minimal base system: disable the firewall, selinux and swap, update the software sources, configure time synchronization, install common tools, then reboot and verify the base configuration. For CentOS, 7.5 or later is recommended; for Ubuntu, 18.04 or a later stable release.

master nodes: provide the api-server and must be highly available, reached through a load balancer; three nodes are generally enough. Unlike redis and some other clustered services, the k8s masters have no hard requirement on their number.

When k8s is deployed with kubeadm, at most one master may be down at a time. In a test environment you can save resources by running a single master and simply repairing it promptly when problems occur.

node nodes: each node runs two components, kube-proxy and kubelet, and these also reach the masters' api-server through the load balancer.

LB: two load balancers can be set up, one giving administrators access to the api-server and one giving the nodes access to the api-server.

In practice:

node nodes: run the business images, so they need higher specs, preferably physical machines, e.g. 256 GB of memory and multi-core CPUs or better
master nodes, harbor and the load balancers: can be either VMs or physical machines, but physical machines are still preferred

Lab environment used in this article:

master nodes:
  master-19: 10.0.0.19
  master-29: 10.0.0.29
  master-39: 10.0.0.39
HAProxy:
  ha-1: 10.0.0.49
  ha-2: 10.0.0.59
Harbor:
  harbor: 10.0.0.69
node nodes:
  node-79: 10.0.0.79
  node-89: 10.0.0.89
  node-99: 10.0.0.99

3.1.1 Disable the swap partition

k8s does not support swap: if it is not disabled, k8s cannot be deployed. The swap check can be skipped, but that is not recommended; if memory is insufficient, add memory instead.

If swap is used heavily, data is written out to disk and server performance drops.

Disable the swap partition on the master and node hosts.

#1. Disable the swap partition: all hosts

ansible all -m shell -a "sed -r -i.bak '/swap/s@(.*)@#\1@' /etc/fstab"
vim /etc/fstab

# swap was on /dev/sda5 during installation
#UUID=ba6d2bf8-da50-4041-9920-b2b1e6f46626 none            swap    sw              0       0  

reboot
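
Optionally, swap can also be switched off immediately instead of waiting for the reboot, and the result verified; a minimal sketch assuming the same ansible inventory as above:

ansible all -m shell -a "swapoff -a"                 # turn off all active swap right away
ansible all -m shell -a "free -h | grep -i swap"     # the Swap line should report 0B totals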

3.1.2 Kernel parameter tuning

Run on both the master and node hosts.

#2. Kernel parameter tuning: all hosts

ansible all -m copy -a "src=limits.conf dest=/etc/security/limits.conf backup=yes"
ansible all -m copy -a "src=sysctl.conf dest=/etc/sysctl.conf backup=yes"

The contents of the two files:

# Resource limit tuning

vim /etc/security/limits.conf

*                soft    core            unlimited
*                hard    core            unlimited
*                soft    nproc           1000000
*                hard    nproc           1000000
*                soft    nofile          1000000
*                hard    nofile          1000000
*                soft    memlock         32000
*                hard    memlock         32000
*                soft    msgqueue        8192000
*                hard    msgqueue        8192000  
# Kernel parameter tuning

# Enable ipv4 forwarding on the host so that it can route traffic; otherwise containers can only communicate inside their own host.

vim /etc/sysctl.conf

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296




# TCP kernel paramater
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        16384   4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1

# socket buffer
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 20480
net.core.optmem_max = 81920


# TCP conn
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15

# tcp conn reuse
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 1


net.ipv4.tcp_max_tw_buckets = 20000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syncookies = 1

# keepalive conn
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.ip_local_port_range = 10001    65000

# swap
vm.overcommit_memory = 0
vm.swappiness = 10

#net.ipv4.conf.eth1.rp_filter = 0
#net.ipv4.conf.lo.arp_ignore = 1
#net.ipv4.conf.lo.arp_announce = 2
#net.ipv4.conf.all.arp_ignore = 1
#net.ipv4.conf.all.arp_announce = 2

Disable swap, selinux and iptables and tune the kernel parameters and resource limits; once everything is modified, reboot the servers for the changes to take effect.

#3. Reboot to apply the changes
ansible all -m shell -a "reboot"
reboot
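
After the reboot it is worth confirming that the settings actually took effect; a minimal verification sketch:

ansible all -m shell -a "sysctl net.ipv4.ip_forward"   # expect: net.ipv4.ip_forward = 1
ansible all -m shell -a "ulimit -n"                    # expect: 1000000 once the limits apply to the login session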

3.2 Deploying haproxy+keepalived

If the company already runs haproxy+keepalived and does not want to deploy it again, the existing setup can be reused. Still, a dedicated load balancer is preferable; avoid sharing one load balancer across multiple services.

Here it is deployed only on ha1-49 (10.0.0.49).

root@ha1-49:~# apt -y install haproxy keepalived
vip: 10.0.0.188
The nodes access the masters through 10.0.0.188
haproxy listens on TCP on 10.0.0.188; an HTTP-mode proxy cannot be used here because the traffic involves extensive certificate verification

3.2.1 Configure the keepalived VIP

root@ha1-49:~# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived                                                                                                                          

global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.188 dev eth0 label eth0:0
    }
}
root@ha1-49:~# systemctl restart keepalived
root@ha1-49:~# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.49  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe49:4ce4  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:49:4c:e4  txqueuelen 1000  (Ethernet)
        RX packets 4676  bytes 3176356 (3.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2672  bytes 552760 (552.7 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.188  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 00:0c:29:49:4c:e4  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 203  bytes 16161 (16.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 203  bytes 16161 (16.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
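
A quick sanity check of the VIP; a minimal sketch (run the ping from any other host in the 10.0.0.0/24 subnet):

root@ha1-49:~# ip addr show eth0 | grep 10.0.0.188     # the VIP should be bound to eth0
ping -c 2 10.0.0.188                                   # the VIP should answer from another host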

3.2.2 Configure the haproxy listener

root@ha1-49:~# vim /etc/haproxy/haproxy.cfg 

listen k8s-6443
    bind 10.0.0.188:6443
    mode tcp
    server 10.0.0.19 10.0.0.19:6443 check inter 3s fall 3 rise 5
    server 10.0.0.29 10.0.0.29:6443 check inter 3s fall 3 rise 5
    server 10.0.0.39 10.0.0.39:6443 check inter 3s fall 3 rise 5
    # The default round-robin balancing is fine; API requests are stateless HTTP, so source-IP hashing, session sharing or session persistence is not needed
root@ha1-49:~# systemctl restart haproxy
root@ha1-49:~# ss -ntl
State               Recv-Q               Send-Q                              Local Address:Port                              Peer Address:Port               
LISTEN              0                    2000                                   10.0.0.188:6443                                   0.0.0.0:*                  
LISTEN              0                    128                                 127.0.0.53%lo:53                                     0.0.0.0:*                  
LISTEN              0                    128                                       0.0.0.0:22                                     0.0.0.0:*                  
LISTEN              0                    128                                          [::]:22                                        [::]:*  

This article configures only one haproxy+keepalived pair; normally haproxy itself should also be made highly available.
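
Before relying on the listener, the haproxy configuration can be validated and the VIP port probed; a minimal sketch (the backends stay DOWN until the api-servers actually exist, so this only checks the frontend):

root@ha1-49:~# haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax check of the configuration
root@ha1-49:~# nc -vz 10.0.0.188 6443                   # the VIP should accept TCP connections (requires netcat)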

3.3 Installing docker

When k8s runs on top of docker, the docker version matters.
The k8s project tests and validates compatible docker versions and publishes them in the CHANGELOG of each k8s release.

Taking k8s v1.17.16 as an example:

Deployed in this article:
docker v19.03.14: master and node hosts
k8s v1.19.1; a later section demonstrates how to upgrade to 1.19.6
kubeadm 1.19.1-00
kubectl 1.19.1-00: the command-line management client, installed only on the management node
kubelet 1.19.1-00
kube-proxy: runs automatically as a container, no manual installation required

Install docker on the master and node hosts.

ansible master_k8s:node_k8s:harbor -m script -a "docker_install.sh"
root@master-19:/data/scripts# vim docker_install.sh 

#!/bin/bash                                                                                                                                                  
apt update
apt -y install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

apt update

VERSION_STRING=5:19.03.14~3-0~ubuntu-bionic
apt -y install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING
root@master-19:/data/scripts# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-10-31 15:10:51 AEDT; 51s ago
...
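
The kubeadm preflight check later warns that docker is using the cgroupfs cgroup driver while systemd is recommended. Switching the driver is optional for this walkthrough; a minimal sketch of how it could be done, assuming /etc/docker/daemon.json does not exist yet:

cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"    # should now report: Cgroup Driver: systemd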

3.4 Deploying harbor

When kubeadm deploys k8s it downloads many images from docker hub. It is advisable to pull all of them locally, retag them and push them to the local harbor. Later k8s deployments can then pull from the local harbor, which speeds things up and also avoids being stuck if the upstream registry ever stops providing that image version.
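
The mirroring workflow is essentially pull, retag with the local harbor address, push. A minimal sketch using the pause image; the project name baseimages is an assumption, and docker must already trust the registry at 10.0.0.69 (HTTPS or insecure-registries):

docker pull k8s.gcr.io/pause:3.2
docker tag k8s.gcr.io/pause:3.2 10.0.0.69/baseimages/pause:3.2
docker login 10.0.0.69                                  # log in with a harbor account that can push to the project
docker push 10.0.0.69/baseimages/pause:3.2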

4. Deploying the kubeadm components

The kubeadm component has to be installed on both the master and node hosts. It is used to initialize the cluster on any one master, and to join the remaining masters and all nodes to the k8s cluster.

master nodes: use kubeadm to initialize the cluster; the initialization runs on any single master.
After initialization, run kubeadm on the remaining masters to join them to the k8s cluster.

node nodes: joining a node to the k8s cluster also uses the kubeadm tool.
kubelet is mandatory on the nodes, and the masters need it too, because the masters also start containers.
kubectl is installed on the management node: install it on whichever host you want to manage k8s from.

4.1 Install kubeadm on the master and node hosts

The versions of kubeadm, kubelet and kubectl must match the k8s version: whichever k8s version you plan to install, install the corresponding version of these components.

Install kubeadm, kubelet and kubectl on the master nodes:

#!/bin/bash                                                                                                                                                  

apt update
apt install -y apt-transport-https ca-certificates curl


curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg


echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | \
tee /etc/apt/sources.list.d/kubernetes.list

apt update
apt install -y kubelet=1.19.1-00 kubeadm=1.19.1-00 kubectl=1.19.1-00

Install kubeadm and kubelet on the node hosts:

#!/bin/bash                                                                                                                                                  

apt update
apt install -y apt-transport-https ca-certificates curl


curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg


echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | \
tee /etc/apt/sources.list.d/kubernetes.list

apt update
apt install -y kubelet=1.19.1-00 kubeadm=1.19.1-00

At this point kubelet reports errors because the authentication files have not been generated yet; this is harmless and resolves itself once the cluster credentials are created.
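
Because the component versions have to track the cluster version, it can also help to pin the packages so a routine apt upgrade does not bump them, and to glance at the kubelet state; a minimal sketch:

apt-mark hold kubelet kubeadm kubectl        # prevent accidental upgrades (omit kubectl on nodes where it is not installed)
systemctl status kubelet --no-pager          # expect errors/restarts until kubeadm init or join creates the config
journalctl -u kubelet --no-pager | tail      # the errors disappear once the cluster credentials exist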


4.2 Using the kubeadm command

The kubeadm command:

Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands # kubeadm sub-commands still in the experimental stage
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
      --add-dir-header           If true, adds the file directory to the header of the log messages
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.

  1. alpha: kubeadm sub-commands that are still in the experimental stage.
The alpha command group contains the k8s certificate management commands.
A cluster initialized with kubeadm uses certificates that are valid for one year by default.
Once that year is up and the certificates expire, the cluster can no longer be used; the certificates then have to be renewed with the renew command in the alpha group, which renews all cluster certificates.
So check the certificate expiry regularly and renew before the one-year mark.
The check-expiration command reports how long the certificates remain valid (see the sketch after this command list).
  2. completion: bash command completion for kubeadm; requires the bash-completion package.

Run this on every master and node host, since kubeadm is installed on all of them.

mkdir /data/scripts -p
# kubeadm supports bash and zsh completion; bash is the usual choice
# before completion can be used, the completion functions and variables are written to a script
kubeadm completion bash > /data/scripts/kubeadm.sh # the script name is arbitrary
chmod a+x /data/scripts/kubeadm.sh
source /data/scripts/kubeadm.sh
echo 'source /data/scripts/kubeadm.sh' >> /etc/profile  # take effect automatically at login
  3. config: manage the kubeadm cluster configuration, which is persisted in a ConfigMap in the cluster.
Running kubeadm config print init-defaults produces a configuration file
containing the full set of parameters; this file can be used to initialize the k8s cluster.
  4. help: help about any command
kubeadm help
  5. init: initialize the cluster.
Run it on any one master; in this walkthrough it is run on master-19 (10.0.0.19).
  6. join: join a machine to an existing k8s control plane.

  7. reset: best-effort rollback of the changes that kubeadm init or kubeadm join made to the host; typically used on physical machines to undo everything kubeadm did.

kubeadm init only needs to run once, on a single master.
kubeadm join has to run on all remaining masters and on every node.
  8. token: manage bootstrap tokens.
After kubeadm init finishes, the master that ran it generates a cluster token.
When a node is later joined to the cluster, it has to authenticate, and that authentication uses the token generated during cluster initialization.
The token is valid for 24 hours by default, so make sure it is still valid at join time.
A token generated on any master is valid cluster-wide, because it is stored in etcd; if it has expired, log in to any master and generate a new one.
  9. upgrade: upgrade the k8s version.
k8s releases move quickly, so once the current version no longer meets requirements, the cluster needs to be upgraded.
  10. version: show version information
root@master-19:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T11:24:31Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
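
As mentioned under the alpha sub-command above, certificates created by kubeadm are valid for one year; a minimal sketch of checking and renewing them on a 1.19 master (in 1.19 these commands live under kubeadm alpha certs, newer releases move them to kubeadm certs):

kubeadm alpha certs check-expiration          # list every certificate and its expiry date
kubeadm alpha certs renew all                 # renew all certificates managed by kubeadm
# after renewing, restart the static control-plane pods (or restart kubelet) so they pick up the new certificates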

4.3 Preparation for kubeadm init

Run the cluster initialization on any one of the three masters; it only needs to be done once. Here it is performed on master-19 (10.0.0.19).

4.3.1 The kubeadm init command

root@master-19:~# kubeadm init --help

****** --apiserver-advertise-address string   The IP address the API Server will advertise it's listening on. If not set the default network interface will be used.
# The address the apiserver listens on; if unset, the default network interface is used, i.e. the local network.
# It is usually better to set an explicit address ("x.x.x.x", string type): use the IP of the master being initialized, here 10.0.0.19.

****** --apiserver-bind-port int32            Port for the API Server to bind to. (default 6443)
# The port the apiserver listens on, 6443 by default.

# The following four options concern k8s certificates. kubeadm configures the certificates automatically during initialization, so unless external certificates are used they can be left alone.
#--apiserver-cert-extra-sans stringSlice # Optional extra Subject Alternative Names for the API Server serving certificate; can be IP addresses or DNS names.
--cert-dir string # Where certificates are stored; default path is /etc/kubernetes/pki
--certificate-key string # Key used to encrypt the control-plane certificates stored in the kubeadm-certs Secret
--config string # Path to a kubeadm configuration file

****** --control-plane-endpoint string # A stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain name; multi-master high availability in k8s is built on this option. The VIP here is the load balancer VIP. For a single-master cluster this option is not needed.

--cri-socket string # Path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to detect it automatically (docker's socket). "Use this option only if you have more than one CRI installed or a non-standard CRI socket." k8s 1.19.1 still uses the docker runtime, so this option is not needed and the docker socket is used by default; from v1.24 onward containerd is used as the runtime.
--dry-run # Do not apply any changes; only print what would be done, i.e. a test run
--experimental-kustomize string # Path to the kustomize patches for the static pod manifests
--feature-gates string # A set of key=value pairs describing feature gates; options are: IPv6DualStack=true|false (ALPHA - default=false)

******  --ignore-preflight-errors strings # Ignore errors found during the preflight checks, e.g. swap; "all" ignores everything. If swap is enabled this option is mandatory, although enabling swap is not recommended in the first place
******  --image-repository string # Image registry to pull from; defaults to k8s.gcr.io
****** --kubernetes-version string # The k8s version to install; defaults to stable-1
****** --node-name string # Node name; the node's hostname is used automatically, so this rarely needs to be set, but make sure every node hostname is unique, and every master hostname as well
****** --pod-network-cidr # The pod IP range: the large address block from which each pod is given a small subnet. Make this block generously large (e.g. 10.200.0.0/16) so it can cover all future pods, and make sure it does not collide with other networks.
****** --service-cidr # The service network range; it must not be the same as the pod network (e.g. 10.100.0.0/16)
****** --service-dns-domain string # The internal k8s domain, cluster.local by default; the DNS service (kube-dns/coredns) generates records under it. Every service created in k8s gets this suffix; companies usually standardize on one suffix per environment, e.g. project.online (production) and project.test (testing)

# Plan the IP ranges, DNS and domain name at initialization time; changing them once the cluster is running is painful

--skip-certificate-key-print # Do not print the encryption key; do not use this, the key should always be printed
--skip-phases strings # Which phases to skip; do not use this, every phase of the initialization is necessary
--skip-token-print # Skip printing the token; without the token nodes cannot join the cluster, so do not use this either
--token # Specify the token explicitly; tokens can also be managed separately. If omitted, a token is generated in the default format. The format is [a-z0-9]{6}\.[a-z0-9]{16} - e.g. abcdef.0123456789abcdef
--token-ttl # Token lifetime; default 24 hours, 0 means never expire. 24 hours is usually enough
--upload-certs # Upload the control-plane certificates to the kubeadm-certs Secret

# Global options:
--add-dir-header # If true, add the file directory to the header of log messages
--log-file string # If non-empty, use this log file; otherwise the system default /var/log/syslog is used
--log-file-max-size uint # Maximum log file size in megabytes, default 1800; 0 means unlimited
--rootfs # Path to the real host root filesystem, i.e. an absolute path
--skip-headers # If true, omit header prefixes in log messages
--skip-log-headers # If true, omit headers when opening log files
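
For reference, a multi-master initialization would point --control-plane-endpoint at the VIP configured in section 3.2 rather than at a single api-server address. The following is only a sketch of what such a command might look like; the single-master walkthrough below does not use it, and the multi-master setup is covered in section 6:

kubeadm init --control-plane-endpoint="10.0.0.188:6443" \
  --apiserver-advertise-address=10.0.0.19 \
  --kubernetes-version=v1.19.1 \
  --pod-network-cidr=10.200.0.0/16 \
  --service-cidr=10.100.0.0/16 \
  --service-dns-domain=david.local \
  --upload-certs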

4.3.2 Prepare the images

When kubeadm initializes k8s it uses a set of images, which can be listed with the kubeadm config images list command.
Different k8s versions require different images.

# First get the image list with kubeadm config images list

root@master-19:~# kubeadm config images list --kubernetes-version v1.19.1 # specify the k8s version
W1031 22:45:06.928798   28927 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.1 # runs as a container
k8s.gcr.io/kube-controller-manager:v1.19.1 # runs as a container
k8s.gcr.io/kube-scheduler:v1.19.1 # runs as a container
k8s.gcr.io/kube-proxy:v1.19.1 # runs as a container
k8s.gcr.io/pause:3.2 # a pod usually starts a single container but may start several; when multiple containers are packed into one pod their network also has to be wrapped into the pod, and the pause container is what provides that network encapsulation
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Different containers running in the same pod can use different rootfs, for example one CentOS and one Ubuntu; their rootfs are isolated from each other by namespaces. Their network, however, is shared: the containers use the same sockets, because every pod wraps its containers around a pause container, so inside each container you see the same IP and port usage of that pod network. For example, inside the php container you can see that port 80 of the shared network is in use, and the containers can reach each other via 127.0.0.1. This is how network communication between containers within the same pod is solved.
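
The shared network namespace can be reproduced with plain docker by attaching additional containers to the network of a pause-style container; a minimal illustrative sketch (the image names are just examples):

docker run -d --name net-holder k8s.gcr.io/pause:3.2                             # holds the network namespace, like the pod's pause container
docker run -d --name web --net=container:net-holder nginx:alpine                 # nginx now listens on port 80 of the shared namespace
docker run --rm --net=container:net-holder busybox wget -qO- http://127.0.0.1    # a second container reaches nginx via 127.0.0.1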


4.3.3 Pull the images on the master node

It is recommended to download the images on the master in advance to shorten installation time and avoid the k8s deployment failing because of an image download problem.

The master needs all seven images; the nodes only need pause and kube-proxy.

root@master-19:/data/scripts# vim images_pull.sh

#!/bin/bash                                                                                                                                                  
  
docker pull k8s.gcr.io/kube-apiserver:v1.19.1
docker pull k8s.gcr.io/kube-controller-manager:v1.19.1
docker pull k8s.gcr.io/kube-scheduler:v1.19.1
docker pull k8s.gcr.io/kube-proxy:v1.19.1
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.13-0
docker pull k8s.gcr.io/coredns:1.7.0
root@master-19:/data/scripts# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.19.1             33c60812eab8        2 years ago         118MB
k8s.gcr.io/kube-apiserver            v1.19.1             ce0df89806bb        2 years ago         119MB
k8s.gcr.io/kube-controller-manager   v1.19.1             538929063f23        2 years ago         111MB
k8s.gcr.io/kube-scheduler            v1.19.1             49eb8a235d05        2 years ago         45.7MB
k8s.gcr.io/etcd                      3.4.13-0            0369cf4303ff        2 years ago         253MB
k8s.gcr.io/coredns                   1.7.0               bfe3a36ebd25        2 years ago         45.2MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        2 years ago         683kB
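
Alternatively, kubeadm itself can pre-pull exactly the images it needs for a given version; a minimal sketch:

kubeadm config images pull --kubernetes-version v1.19.1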

5. Single-master initialization


For test, development and other non-production environments a single master is fine; production should use multiple masters to keep k8s highly available.

5.1 Run the initialization command on the single master

# apiserver advertise address: 10.0.0.19
# listening port: 6443
# k8s version: v1.19.1
# service network: 10.100.0.0/16
# pod network: 10.200.0.0/16
root@master-19:~#  kubeadm init --apiserver-advertise-address=10.0.0.19 --apiserver-bind-port=6443 --kubernetes-version=v1.19.1 --pod-network-cidr=10.200.0.0/16 \
--service-cidr=10.100.0.0/16 --service-dns-domain=david.local --ignore-preflight-errors=swap
W1101 22:44:53.127197   13597 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.david.local master-19.k8s] and IPs [10.100.0.1 10.0.0.19]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master-19.k8s] and IPs [10.0.0.19 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master-19.k8s] and IPs [10.0.0.19 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.503222 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-19.k8s as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-19.k8s as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: g3l8wx.9yaprg4pgdi1k2vy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!  # initialization succeeded

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.19:6443 --token g3l8wx.9yaprg4pgdi1k2vy \
    --discovery-token-ca-cert-hash sha256:64924aa9fdfd8e6f47f9957bbf6b4b13751a21dfd28deb57dd72e802106aa719 

5.2 Configure kubectl authentication

kubectl is used to manage k8s, but authentication must be configured before any management can happen.

Credentials file: /etc/kubernetes/admin.conf. It contains the k8s server address and the certificate material, so never leak it.

    server: https://10.0.0.19:6443   -  when kubectl manages k8s it sends requests to this API address, carrying the credentials from this file.
            If the file leaks, anyone holding it can manage the k8s cluster through it.

Install kubectl and configure the credentials on whichever host you manage k8s from; here that is master-19 (10.0.0.19).

To start using your cluster, you need to run the following as a regular user:

root@master-19:~# mkdir -p $HOME/.kube
root@master-19:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master-19:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

With the kubectl credentials configured, verify them first with kubectl get node:

# The node is NotReady at this point because the network add-on has not started yet, so the node cannot become Ready
root@master-19:~# kubectl get node
NAME            STATUS     ROLES    AGE    VERSION
master-19.k8s   NotReady   master   103s   v1.19.1

5.3 Deploy the network add-on

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
  • flannel: https://github.com/flannel-io/flannel/releases

The project provides both binary packages and a yaml manifest; for Kubernetes v1.17+ the yaml manifest can be used. This walkthrough uses the yaml manifest and deploys flannel on the master that ran the initialization.

  1. Get the yml file
root@master-19:/data/scripts# mkdir yaml
root@master-19:/data/scripts# cd yaml/
root@master-19:/data/scripts/yaml# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
  2. Edit the kube-flannel.yml configuration
# Change the pod address range

  net-conf.json: |  # net-conf.json sets the pod address range
    {
    # "Network": "10.244.0.0/16", # 10.244 is the upstream default pod range; it must be changed to the pod range given at initialization, here 10.200
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
# Change the image addresses: keep the two consistent and do not use rc versions

      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
       # Make sure beforehand that this address is reachable locally, because the image is pulled from it; test with a manual docker pull first and switch to another source if it fails
       # If you switch sources, docker pull the image locally, retag it and push it to the local harbor, then point this field at the harbor address so future pulls come from the local harbor
       # Do not use rc versions
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 

      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0

root@master-19:~# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
v1.1.0: Pulling from rancher/mirrored-flannelcni-flannel-cni-plugin
6097bfa160c1: Pull complete 
d10987c60bb3: Pull complete 
Digest: sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b
Status: Downloaded newer image for rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
root@master-19:~# docker images
REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
rancher/mirrored-flannelcni-flannel-cni-plugin   v1.1.0              fcecffc7ad4a        5 months ago        8.09MB
  3. Apply the yml file
root@master-19:/data/scripts/yaml# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

When kubectl deploys flannel, it first reads ~/.kube/config for the apiserver address and credentials and uploads the yml file to the apiserver; the apiserver then creates the flannel pods according to the configuration in the yml file.

  4. Verify that the pods and the node come up
root@master-19:/data/scripts/yaml# kubectl get node 
NAME            STATUS   ROLES    AGE     VERSION
master-19.k8s   Ready    master   5m15s   v1.19.1

root@master-19:/data/scripts/yaml# kubectl get pod -A
NAMESPACE      NAME                                    READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-h9t89                   1/1     Running   0          56s
kube-system    coredns-f9fd979d6-2mmkj                 1/1     Running   0          5m7s
kube-system    coredns-f9fd979d6-j8vxt                 1/1     Running   0          5m7s
kube-system    etcd-master-19.k8s                      1/1     Running   0          5m23s
kube-system    kube-apiserver-master-19.k8s            1/1     Running   0          5m22s
kube-system    kube-controller-manager-master-19.k8s   1/1     Running   0          5m23s
kube-system    kube-proxy-fr4th                        1/1     Running   0          5m7s
kube-system    kube-scheduler-master-19.k8s            1/1     Running   0          5m23s

With that, the single-master initialization is complete.

5.4 Join the node hosts

If you have lost the token or the sha256 hash of the CA public key, both can be retrieved on the management node with the following commands.

# Get the token; note that it must still be valid
root@master-19:~# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
vt01rg.ykssroge3yadxi3a   23h         2022-11-03T12:46:01+11:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
# Get the sha256 hash of the CA public key
root@master-19:~# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
000259f02dc208616aa8118508c6c6aa6ad769187521a8fefb7910b9cb3f922c
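# If the original token has already expired, a new token together with the complete join command
# can be generated in one step; a minimal sketch:
root@master-19:~# kubeadm token create --print-join-command
# prints: kubeadm join 10.0.0.19:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>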
# Run on every node

Then you can join any number of worker nodes by running the following on each as root:
# The token expires after 24 hours by default; the sha256 hash of the certificate never expires
kubeadm join 10.0.0.19:6443 --token g3l8wx.9yaprg4pgdi1k2vy \
    --discovery-token-ca-cert-hash sha256:64924aa9fdfd8e6f47f9957bbf6b4b13751a21dfd28deb57dd72e802106aa719 
root@node1-79:~# kubeadm join 10.0.0.19:6443 --token g3l8wx.9yaprg4pgdi1k2vy     --discovery-token-ca-cert-hash sha256:64924aa9fdfd8e6f47f9957bbf6b4b13751a21dfd28deb57dd72e802106aa719
# After the join each node also initializes itself and downloads flannel; once that completes the node turns Ready, which can be checked from the master

root@node1-79:~# docker images
REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
rancher/mirrored-flannelcni-flannel              v0.20.0             fd14f6e39753        2 weeks ago         59.4MB
rancher/mirrored-flannelcni-flannel-cni-plugin   v1.1.0              fcecffc7ad4a        5 months ago        8.09MB
k8s.gcr.io/kube-proxy                            v1.19.1             33c60812eab8        2 years ago         118MB
k8s.gcr.io/pause                                 3.2                 80d28bedfe5d        2 years ago         683kB
root@master-19:~# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
master-19.k8s   Ready    master   8m29s   v1.19.1
node1-79.k8s    Ready    <none>   82s     v1.19.1
node2-89.k8s    Ready    <none>   80s     v1.19.1
node3-99.k8s    Ready    <none>   78s     v1.19.1

With that, a cluster with one master and three nodes is up and can run services.

5.5 Reverting with kubeadm reset

Run reset on the master (10.0.0.19) and the three nodes to restore them. Once cleaned up, the multi-master configuration follows.

root@master-19:~# kubeadm reset
root@master-19:~# rm -rf $HOME/.kube/config  # can be skipped on nodes that never configured kubectl credentials
root@node1-79:~# kubeadm reset
root@node1-79:~# rm -rf $HOME/.kube/config
root@node2-89:~# kubeadm reset 
root@node2-89:~# rm -rf $HOME/.kube/config
root@node3-99:~# kubeadm reset     
root@node3-99:~# rm -rf $HOME/.kube/config
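
kubeadm reset itself points out that it does not clean up iptables/IPVS rules or the CNI configuration; a minimal cleanup sketch to run on each reset host (adjust to what is actually present):

rm -rf /etc/cni/net.d                                        # drop the CNI configuration left behind by flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear 2>/dev/null || true                          # only relevant if kube-proxy ran in IPVS mode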

6. Multi-master initialization

Drawbacks of a single master:

  1. If the master fails, the cluster becomes unusable; there is no high availability
  2. To run business pods on the master you must additionally run kubectl taint nodes --all node-role.kubernetes.io/master-; otherwise the master will not run any pods other than the control-plane ones
