Install Kubernetes 1.24

This article is a hands-on practice of 王树森's one-take complete walkthrough of deploying a (possibly the newest covered anywhere) Kubernetes 1.24.1 cluster. Special thanks to https://space.bilibili.com/479602299

Evolution of the Kubernetes container runtime

The early Kubernetes runtime architecture was nowhere near this complex: kubelet created containers by calling the Docker daemon directly, and the Docker daemon called libcontainer to actually run them.

The big international vendors decided that the runtime standard should not be controlled by Docker alone, so they banded together and created the Open Container Initiative (OCI), and persuaded Docker to wrap up libcontainer and donate it as runC, the OCI reference implementation.

OCI (the Open Container Initiative standard) specifies two things:

  • What a container image looks like, i.e. the ImageSpec. Roughly: the image must be a compressed directory tree with certain files laid out in a certain structure;
  • Which commands a container must accept and what they do, i.e. the RuntimeSpec. Roughly: a "container" must be able to execute "create", "start", "stop" and "delete", with well-defined behavior (see the runc sketch below).
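
As a concrete illustration of the RuntimeSpec lifecycle, here is a minimal sketch of driving an OCI bundle with runc by hand. The bundle path and container name are made-up examples, and the rootfs would normally come from an unpacked image rather than an empty directory:

# hypothetical example: prepare an OCI bundle and drive it with runc
mkdir -p /tmp/demo/rootfs          # rootfs would normally be an unpacked image filesystem
cd /tmp/demo && runc spec          # generate a default config.json (the RuntimeSpec)
runc create demo                   # "create" a container from the bundle in the current directory
runc start demo                    # "start" it
runc list                          # list containers known to runc
runc delete -f demo                # "delete" it (force)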

runC, the reference implementation, can take a standard-compliant container image and run it according to the standard. The point of a standard is that it makes innovation easier: as long as you comply, every other tool in the ecosystem can work with you (granted, the OCI spec itself is not drafted terribly well, and real-world engineering still needs some adapters). My image can then be built with any tool, and my "container" no longer has to use namespaces and cgroups for isolation. This is what lets various virtualization-based containers participate in the container ecosystem.

Next, rkt (from CoreOS, similar to Docker) wanted a share of Docker's pie and asked for native rkt support as a Kubernetes runtime; the PR was actually merged. But the many pitfalls of that integration kept the Kubernetes maintainers constantly firefighting.

Then Kubernetes 1.5 introduced the CRI, the Container Runtime Interface. Kubernetes told everyone: if you want to be a runtime, just implement this interface. The guest had successfully become the host.

At the time, however, Kubernetes had not yet reached its current position as the undisputed leader, so no runtime could afford to tie itself to Kubernetes and expose only the CRI. Hence the notion of a shim: a shim acts as an adapter that maps a runtime's own interface onto the Kubernetes CRI, as with dockershim in the figure below.

[Figure 1]

Around this time Docker wanted to push Swarm into the PaaS market, so it split its architecture: container operations were moved into a separate daemon process, containerd, leaving the Docker daemon to handle the higher-level packaging and orchestration. Unfortunately Swarm lost badly to Kubernetes.

Docker then donated the containerd project to the CNCF and retreated to focus on Docker Enterprise.

The Docker+containerd runtime stack was rather convoluted, so Kubernetes adopted a scheme that uses containerd directly as the runtime. Of course, containerd also has to serve schedulers other than Kubernetes, such as Swarm, so it does not implement CRI directly; that adaptation is again left to a shim.

In containerd 1.0, the CRI adaptation was handled by a separate process, cri-containerd;

containerd 1.1 did it more elegantly: the cri-containerd process was dropped and the adaptation logic was moved into the containerd main process as a plugin.

Even before containerd did all this, the community already had a more focused CRI runtime: CRI-O. It is very pure; it simply bridges CRI and OCI and serves as a Kubernetes-only runtime:
[Figure 2]

Here conmon corresponds to containerd-shim; the intent is largely the same.

CRI-O and the (direct) containerd approach are indeed much simpler than the default dockershim, but for a long time they lacked proven production deployments. With the recent 1.24 release, Kubernetes finally dropped built-in Docker support, so containerd-based setups will presumably become more and more common in production.

Preparing to install Kubernetes 1.24

Overview

From the discussion above we can see the following ways of wiring up a runtime; the path where kubelet drives the Docker daemon directly is no longer supported as of 1.24 (see the figure below).

[Figure 3]

  • Cluster creation option 1: containerd
    By default, Kubernetes uses containerd when creating a cluster.
  • Cluster creation option 2: Docker
    Docker is still widely deployed. Although Kubernetes 1.24 drops kubelet's built-in Docker support by default, a cluster can still be created on Docker using the cri-dockerd adapter maintained by Mirantis.
  • Cluster creation option 3: CRI-O
    CRI-O is the most direct runtime for Kubernetes to create containers with; the cluster is created with the help of the cri-o plugin.

Note: the latter two options require changing kubelet's startup arguments.

The three approaches are covered in turn below.

We use Ubuntu 20.04 as the host OS; first configure the apt sources.

# Aliyun mirror
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse

# Tsinghua mirror

# deb-src entries are commented out by default to speed up apt update; uncomment them if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
apt update

Prerequisites

The environment requirements can be found in the official Kubernetes documentation:

Installing kubeadm

Before you begin

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian-based and Red Hat-based distributions, as well as distributions without a package manager.
  • 2 GB or more of RAM per machine (less leaves little room for your applications).
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster (a public or private network is fine).
  • Unique hostname, MAC address and product_uuid for every node. See the official docs for more details.
  • Certain ports must be open on your machines, most importantly 6443. See the official docs for details; the following command checks whether it is reachable:
nc 127.0.0.1 6443 
  • Swap disabled. You MUST disable swap for the kubelet to work properly (see the commands below).
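A common way to do this (a sketch; the fstab edit assumes the usual " swap " entries, adjust to your layout):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap lines so swap stays off after reboot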

  • Allow iptables to see bridged traffic

    • Make sure the br_netfilter module is loaded. You can check with lsmod | grep br_netfilter; to load it explicitly, run sudo modprobe br_netfilter.
    • For iptables on your Linux nodes to correctly see bridged traffic, net.bridge.bridge-nf-call-iptables must be set to 1 in your sysctl configuration. For example:
    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF
    
    sudo modprobe overlay
    sudo modprobe br_netfilter
    
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system
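
    To confirm that the module is loaded and the settings took effect, you can check:

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward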
    

    Check br_netfilter

    root@cp:~# modinfo br_netfilter
    filename:       /lib/modules/5.4.0-113-generic/kernel/net/bridge/br_netfilter.ko
    description:    Linux ethernet netfilter firewall bridge
    author:         Bart De Schuymer <[email protected]>
    author:         Lennert Buytenhek <[email protected]>
    license:        GPL
    srcversion:     C662270B33245DF63170D07
    depends:        bridge
    retpoline:      Y
    intree:         Y
    name:           br_netfilter
    vermagic:       5.4.0-113-generic SMP mod_unload modversions 
    sig_id:         PKCS#7
    signer:         Build time autogenerated kernel key
    sig_key:        6E:D3:96:DC:0A:DB:28:8D:E2:D1:37:5C:EA:E7:55:AD:E5:7E:DD:AA
    sig_hashalgo:   sha512
    signature:      72:D5:E8:E3:90:FC:1D:A6:EC:C9:21:0D:37:81:F9:20:7C:C6:29:85:
                    C8:7A:61:17:1F:05:D1:2F:67:F7:0B:69:95:08:F1:71:0E:7C:3A:16:
                    58:C5:C6:03:08:BD:3C:F2:CE:6D:AC:FA:9A:CC:3B:53:0C:F6:C0:A1:
                    95:B3:B7:7F:2F:1C:CB:79:C0:3B:D0:8B:39:E6:1D:F0:94:EF:7F:0E:
                    2D:DA:03:0A:9D:4C:AB:83:DB:E2:DE:EC:60:60:26:A7:CC:4E:D6:5E:
                    74:10:22:E0:7E:13:23:AB:99:A0:A8:AB:66:87:5E:49:D9:B4:86:96:
                    BF:02:F4:3D:D2:01:AE:CA:34:5B:53:D1:76:41:1C:02:8C:BE:B3:DA:
                    D2:96:C3:15:01:76:25:71:81:44:C3:3E:1B:09:7E:F1:C5:3C:4F:9C:
                    FA:E3:90:BF:53:E1:B5:9B:1F:62:68:06:AA:16:03:48:38:54:6D:18:
                    72:2D:62:93:68:B3:4A:DC:6B:51:CE:E6:91:A1:19:12:43:0D:CF:87:
                    43:FC:5D:86:CD:FF:C3:9E:9C:FF:D2:8F:EE:00:87:2F:08:79:51:F8:
                    F3:F8:17:1C:86:52:E8:80:79:32:63:EC:3C:E2:AF:A5:F0:2B:BB:B2:
                    56:7F:0A:0E:98:0D:E4:DF:8A:96:A1:53:3C:AE:E6:7F:07:B3:21:3A:
                    22:78:2A:0D:C1:40:E7:CB:9A:9E:77:9C:71:4F:AC:8A:09:79:2A:05:
                    BD:1A:AD:92:0E:65:50:FD:2E:EC:9F:60:46:D5:15:21:BC:1C:51:FD:
                    EF:C9:CC:1C:AD:CD:49:49:C9:9C:B3:77:16:B3:A2:5D:BF:12:41:6F:
                    3C:95:FD:2D:3F:BF:A6:AD:E4:62:E6:E9:63:C2:C1:67:27:41:05:18:
                    46:CD:FA:99:5A:71:9A:9B:2D:6E:64:35:F6:67:1B:EA:D6:E4:17:A7:
                    7D:22:AB:A0:7A:E0:08:BB:76:B6:AF:1C:57:59:41:F3:AD:56:89:D7:
                    64:4A:B6:DD:76:6D:87:B1:CE:AD:1E:B2:C7:85:F0:85:80:79:0E:AE:
                    5A:DF:EE:6E:43:9E:49:0A:64:A3:11:5A:2E:F9:7B:B4:A7:A1:88:C8:
                    AC:FB:1B:2E:4B:1A:03:C8:42:31:9A:D1:4A:18:0F:FA:AA:D1:E4:79:
                    75:2A:23:6C:4C:B3:8B:5A:CA:C2:29:BC:81:A1:91:8D:FC:41:1A:C2:
                    AA:1F:2F:54:0D:D9:14:F1:CF:14:A8:44:CC:F5:4C:06:C8:DD:32:52:
                    4B:48:00:32:3E:41:6E:F7:3F:BE:5B:48:33:04:10:02:B0:68:20:F6:
                    2B:AD:08:6B:B8:D3:91:4A:A7:4D:79:F9
    

    Package installation

    Update the package sources

    curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - 
    echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
    apt update
    
    

    Check the available kubeadm versions

    root@cp:~# apt-cache madison kubeadm | head
       kubeadm |  1.24.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.24.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
       kubeadm |  1.23.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
    

    Install the packages

    apt install -y kubeadm=1.24.1-00 kubelet=1.24.1-00 kubectl=1.24.1-00
    
    #for the latest version the version pins can be omitted
    Note:
    the following dependencies are installed automatically:
    conntrack cri-tools ebtables ethtool kubernetes-cni socat
    
    

    Hold the packages to prevent upgrades

    apt-mark hold kubelet kubeadm kubectl
    Note:
    apt-mark sets markers on packages;
    the hold marker marks a package as held back, blocking automatic upgrades
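
    To confirm the hold took effect, list the held packages:

    apt-mark showhold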
    

    After installation the kubelet service fails, because there is no underlying container runtime yet:

    root@cp:~# systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
         Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /etc/systemd/system/kubelet.service.d
                 └─10-kubeadm.conf
         Active: activating (auto-restart) (Result: exit-code) since Wed 2022-06-01 03:37:39 UTC; 1s ago
           Docs: https://kubernetes.io/docs/home/
        Process: 8935 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
       Main PID: 8935 (code=exited, status=1/FAILURE)
    

    A configuration file is generated automatically; it can be customized later:

    root@cp:~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    

    Check which container runtimes crictl can reach

    root@cp:~# crictl images
    WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
    ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory" 
    ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: no such file or directory" 
    ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/crio/crio.sock: connect: no such file or directory" 
    ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory" 
    FATA[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory"
    

    The default endpoints that crictl tries ("image connect using default endpoints") are:

    • unix:///var/run/dockershim.sock    # no longer supported by 1.24 itself
    • unix:///run/containerd/containerd.sock    # upstream default
    • unix:///run/crio/crio.sock
    • unix:///var/run/cri-dockerd.sock

    This is a good point to take a VM snapshot!

    Creating the cluster with containerd

    Container runtimes

    This approach requires:

    1. Installing containerd
    2. Configuring containerd
    3. Initializing the cluster and installing a CNI

    Install containerd

    Online installation

    Install the prerequisite packages:

    root@master:~# apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
    

    Install containerd.io

    
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg |sudo apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    apt update
    apt install -y containerd.io
    
    

    Inspect what the containerd.io package installed

    root@worker01:~# dpkg -L containerd.io
    /.
    /etc
    /etc/containerd
    /etc/containerd/config.toml
    /lib
    /lib/systemd
    /lib/systemd/system
    /lib/systemd/system/containerd.service
    /usr
    /usr/bin
    /usr/bin/containerd
    /usr/bin/containerd-shim
    /usr/bin/containerd-shim-runc-v1
    /usr/bin/containerd-shim-runc-v2
    /usr/bin/ctr
    /usr/bin/runc
    /usr/share
    /usr/share/doc
    /usr/share/doc/containerd.io
    /usr/share/doc/containerd.io/changelog.Debian.gz
    /usr/share/doc/containerd.io/copyright
    /usr/share/man
    /usr/share/man/man5
    /usr/share/man/man5/containerd-config.toml.5.gz
    /usr/share/man/man8
    /usr/share/man/man8/containerd-config.8.gz
    /usr/share/man/man8/containerd.8.gz
    /usr/share/man/man8/ctr.8.gz
    

    The default containerd configuration file is /etc/containerd/config.toml

    root@worker01:~# cat /etc/containerd/config.toml 
    #   Copyright 2018-2022 Docker Inc.
    
    #   Licensed under the Apache License, Version 2.0 (the "License");
    #   you may not use this file except in compliance with the License.
    #   You may obtain a copy of the License at
    
    #       http://www.apache.org/licenses/LICENSE-2.0
    
    #   Unless required by applicable law or agreed to in writing, software
    #   distributed under the License is distributed on an "AS IS" BASIS,
    #   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    #   See the License for the specific language governing permissions and
    #   limitations under the License.
    
    disabled_plugins = ["cri"]
    
    #root = "/var/lib/containerd"
    #state = "/run/containerd"
    #subreaper = true
    #oom_score = 0
    
    #[grpc]
    #  address = "/run/containerd/containerd.sock"
    #  uid = 0
    #  gid = 0
    
    #[debug]
    #  address = "/run/containerd/debug.sock"
    #  uid = 0
    #  gid = 0
    #  level = "info"
    

    As you can see, nearly everything is commented out. containerd's full default configuration can be printed with containerd config default:

    root@worker01:~# containerd config default
    disabled_plugins = []
    imports = []
    oom_score = 0
    plugin_dir = ""
    required_plugins = []
    root = "/var/lib/containerd"
    state = "/run/containerd"
    temp = ""
    version = 2
    
    [cgroup]
      path = ""
    
    [debug]
      address = ""
      format = ""
      gid = 0
      level = ""
      uid = 0
    
    ......
    [ttrpc]
      address = ""
      gid = 0
      uid = 0
    

    To change the registry the Kubernetes sandbox (pause) image is pulled from, generate containerd's full configuration and edit it (alternatively, kubelet's configuration could be adjusted instead):

    #generate the configuration
    mkdir -p /etc/containerd
    containerd config default | tee /etc/containerd/config.toml
    

    Edit /etc/containerd/config.toml directly:

        sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    ...
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                ...
                SystemdCgroup = true
    
    #note: SystemdCgroup must be set to true
    
    #restart containerd
    systemctl restart containerd
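
    If you prefer to script the two edits above, something like the following should work against the config generated by containerd config default (a sketch; verify the resulting file, since the sed patterns assume the stock layout):

    sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
    sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
    systemctl restart containerd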
    

    containerd ships with the ctr command-line client, for example:

    root@cp:~# ctr image ls
    REF TYPE DIGEST SIZE PLATFORMS LABELS
    

    Kubernetes tooling, on the other hand, uses crictl (from the cri-tools package), a client that speaks CRI to any compatible runtime:

    root@cp:~# crictl images list
    WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
    ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory" 
    E0601 05:48:16.809106   30961 remote_image.go:121] "ListImages with filter from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" filter="&ImageFilter{Image:&ImageSpec{Image:list,Annotations:map[string]string{},},}"
    FATA[0000] listing images: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService
    

    This is because crictl has no runtime endpoint configured; point it at containerd's socket in the crictl configuration file:

    #cat /etc/crictl.yaml
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    
    

    Restart and enable the service

    systemctl restart containerd
    systemctl enable containerd
    
    root@cp:~# systemctl status containerd
    ● containerd.service - containerd container runtime
         Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-06-01 05:52:04 UTC; 29s ago
           Docs: https://containerd.io
       Main PID: 31549 (containerd)
          Tasks: 10
         Memory: 19.4M
         CGroup: /system.slice/containerd.service
                 └─31549 /usr/bin/containerd
    
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.079492081Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.contai>
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.079513258Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no>
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.079524291Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.i>
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.079559919Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry e>
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.079597162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.079791605Z" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin con>
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.080004161Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.080057668Z" level=info msg=serving... address=/run/containerd/containerd.sock
    Jun 01 05:52:04 cp systemd[1]: Started containerd container runtime.
    Jun 01 05:52:04 cp containerd[31549]: time="2022-06-01T05:52:04.081450178Z" level=info msg="containerd successfully booted in 0.030415s"
    

    Offline installation

    Download runc and containerd:

    https://github.com/opencontainers/runc

Releases · containerd/containerd

wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64
cp runc.amd64 /usr/local/bin/runc   #just copying the runc binary is enough
chmod +x  /usr/local/bin/runc
cp /usr/local/bin/runc /usr/bin
cp /usr/local/bin/runc /usr/local/sbin/
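
A quick sanity check that the binary is in place and executable:

runc --version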

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
tar xf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /

root@worker02:~# containerd --version
containerd github.com/containerd/containerd v1.6.4 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16

On the node that was installed online (cp), inspect the containerd service:

root@cp:~# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-06-01 06:19:25 UTC; 3h 5min ago
       Docs: https://containerd.io
   Main PID: 36533 (containerd)

root@cp:~# cat /lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

For the offline install the unit lives at /etc/systemd/system/containerd.service instead; it can likewise be inspected with systemctl status containerd

root@worker02:~# cat /etc/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Start the service

root@worker02:~# systemctl daemon-reload
root@worker02:~# systemctl restart containerd

Create the configuration directory

mkdir -p /etc/containerd

Copy /etc/containerd/config.toml and /etc/crictl.yaml from an already-configured node, or generate and edit them as described earlier

root@cp:~# scp /etc/containerd/config.toml root@worker02:/etc/containerd/config.toml 

root@cp:~# scp /etc/crictl.yaml root@worker02:/etc/crictl.yaml

systemctl restart containerd
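
With the configuration copied over, crictl on the new node should now reach containerd without the earlier endpoint errors; a quick check:

crictl info      # prints the runtime status as JSON
crictl images    # empty at this point, but should not report connection errors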

Initialize the cluster with kubeadm

List the images that will be needed (the warning appears because k8s.gcr.io cannot be reached):

root@cp:~# kubeadm config images list
W0601 06:40:29.809756   39745 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0601 06:40:29.809867   39745 version.go:104] falling back to the local client version: v1.24.1
k8s.gcr.io/kube-apiserver:v1.24.1
k8s.gcr.io/kube-controller-manager:v1.24.1
k8s.gcr.io/kube-scheduler:v1.24.1
k8s.gcr.io/kube-proxy:v1.24.1
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/coredns/coredns:v1.8.6
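
Since k8s.gcr.io is unreachable, the images can optionally be pre-pulled from the Aliyun mirror before running kubeadm init (kubeadm init will otherwise pull them itself with the same flags):

kubeadm config images pull \
  --kubernetes-version=v1.24.1 \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --cri-socket=unix:///run/containerd/containerd.sock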

Initialize the cluster with the kubeadm command

kubeadm init --kubernetes-version=1.24.1  --apiserver-advertise-address=192.168.81.21 --apiserver-bind-port=6443 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.211.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///run/containerd/containerd.sock --ignore-preflight-errors=Swap

Run kubeadm init --help to see the full syntax.

root@cp:~# kubeadm init --kubernetes-version=1.24.1  --apiserver-advertise-address=192.168.81.21 --apiserver-bind-port=6443 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.211.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket=unix:///run/containerd/containerd.sock --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.21:6443 --token fybv6g.xlt3snl52qs5wyoo \
        --discovery-token-ca-cert-hash sha256:8545518e775368c0982638b9661355e6682a1f3ba98386b4ca0453449edc97ca

The images that have been downloaded:

root@cp:~# crictl images ls
IMAGE                                                                         TAG                 IMAGE ID            SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   v1.8.6              a4ca41631cc7a       13.6MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.5.3-0             aebe758cef4cd       102MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.24.1             e9f4b425f9192       33.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.24.1             b4ea7e648530d       31MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.24.1             beb86f5d8e6cd       39.5MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.24.1             18688a72645c5       15.5MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.6                 6270bb605e12e       302kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.7                 221177c6082a8       311kB

#when using ctr, the namespace must be specified

root@cp:~# ctr namespace ls
NAME    LABELS 
default        
k8s.io         

root@cp:~# ctr -n k8s.io image ls
REF                                                                                                                                                 TYPE                                                      DIGEST                                                                  SIZE      PLATFORMS                                                                    LABELS                          
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6                                                                                  application/vnd.docker.distribution.manifest.list.v2+json sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 13.0 MiB  linux/amd64,linux/arm,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x   io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e                 application/vnd.docker.distribution.manifest.list.v2+json sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 13.0 MiB  linux/amd64,linux/arm,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x   io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0                                                                                    application/vnd.docker.distribution.manifest.list.v2+json sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 97.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5                    application/vnd.docker.distribution.manifest.list.v2+json sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 97.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.1                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44 32.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver@sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44          application/vnd.docker.distribution.manifest.list.v2+json sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44 32.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1                                                                 application/vnd.docker.distribution.manifest.list.v2+json sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a 29.6 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager@sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a application/vnd.docker.distribution.manifest.list.v2+json sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a 29.6 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.1                                                                              application/vnd.docker.distribution.manifest.list.v2+json sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d 37.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d              application/vnd.docker.distribution.manifest.list.v2+json sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d 37.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.1                                                                          application/vnd.docker.distribution.manifest.list.v2+json sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9 14.8 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler@sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9          application/vnd.docker.distribution.manifest.list.v2+json sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9 14.8 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6                                                                                       application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 294.7 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7                                                                                       application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db                   application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 294.7 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c                   application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:0d2de567157e3fb97dfa831620a3dc38d24b05bd3721763a99f3f73b8cbe99c9 14.8 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
sha256:221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:bb6ed397957e9ca7c65ada0db5c5d1c707c9c8afc80a94acbe69f3ae76988f0c 304.0 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 294.7 KiB linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 13.0 MiB  linux/amd64,linux/arm,linux/arm64,linux/mips64le,linux/ppc64le,linux/s390x   io.cri-containerd.image=managed 
sha256:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5 97.4 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x,windows/amd64 io.cri-containerd.image=managed 
sha256:b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:594a3f5bbdd0419ac57d580da8dfb061237fa48d0c9909991a3af70630291f7a 29.6 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
sha256:beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:1652df3138207570f52ae0be05cbf26c02648e6a4c30ced3f779fe3d6295ad6d 37.7 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed 
sha256:e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693                                                                             application/vnd.docker.distribution.manifest.list.v2+json sha256:ad9608e8a9d758f966b6ca6795b50a4723982328194bde214804b21efd48da44 32.2 MiB  linux/amd64,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x               io.cri-containerd.image=managed

Install Calico as the CNI

root@cp:~# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created

Join the worker nodes

root@worker01:~# kubeadm join 192.168.81.21:6443 --token fybv6g.xlt3snl52qs5wyoo \
>         --discovery-token-ca-cert-hash sha256:8545518e775368c0982638b9661355e6682a1f3ba98386b4ca0453449edc97ca 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

#CP check:
root@cp:/home/zyi# kubectl get node -owide
NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
cp         Ready    control-plane   30h   v1.24.1   192.168.81.21   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.6.4
worker01   Ready    <none>          30h   v1.24.1   192.168.81.22   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.6.4
worker02   Ready    <none>          27h   v1.24.1   192.168.81.23   <none>        Ubuntu 20.04.4 LTS   5.4.0-113-generic   containerd://1.6.4
root@cp:~# kubectl get po -A -owide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE       NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-56cdb7c587-v46wk   1/1     Running   0          118m    10.211.5.3      worker01   <none>           <none>
kube-system   calico-node-2qq4n                          1/1     Running   0          118m    192.168.81.21   cp         <none>           <none>
kube-system   calico-node-slnp9                          1/1     Running   0          2m27s   192.168.81.23   worker02   <none>           <none>
kube-system   calico-node-v2xd8                          1/1     Running   0          118m    192.168.81.22   worker01   <none>           <none>
kube-system   coredns-7f74c56694-4b4wp                   1/1     Running   0          3h      10.211.5.1      worker01   <none>           <none>
kube-system   coredns-7f74c56694-mmvgb                   1/1     Running   0          3h      10.211.5.2      worker01   <none>           <none>
kube-system   etcd-cp                                    1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-apiserver-cp                          1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-controller-manager-cp                 1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-proxy-4n2jk                           1/1     Running   0          2m27s   192.168.81.23   worker02   <none>           <none>
kube-system   kube-proxy-8zdvt                           1/1     Running   0          169m    192.168.81.22   worker01   <none>           <none>
kube-system   kube-proxy-rpf78                           1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
kube-system   kube-scheduler-cp                          1/1     Running   0          3h      192.168.81.21   cp         <none>           <none>
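
As an optional smoke test of the new cluster (the nginx image and names are just examples):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide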

Creating the cluster with the Docker runtime

According to the official documentation, a cluster can also be created using Docker Engine:

Note: the following steps assume you are using the [cri-dockerd](https://github.com/Mirantis/cri-dockerd) adapter to integrate Docker Engine with Kubernetes.

  1. On each of your nodes, install Docker for your Linux distribution by following the Docker Engine installation guide.

  2. Install [cri-dockerd](https://github.com/Mirantis/cri-dockerd) following the instructions in its source repository.

    For cri-dockerd, the CRI socket is /run/cri-dockerd.sock by default.

  3. Initialize the Kubernetes cluster.

Install docker-ce

Online installation: configure the package sources and install:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt update
apt install -y containerd.io docker-ce docker-ce-cli

Note:
by default the docker service itself already uses containerd;
journalctl -u docker.service
shows it connecting to unix:///run/containerd/containerd.sock

Check docker info

root@worker01:~# docker info
Client:
 ...

Server:
 Containers: 0
  ...
 Server Version: 20.10.16
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
...
WARNING: No swap limit support

As shown above, Docker's default cgroup driver is cgroupfs, whereas kubelet (as set up by kubeadm) uses systemd:

root@worker01:~# journalctl -u kubelet | grep systemd |more
Jun 03 02:48:41 worker01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 03 02:48:41 worker01 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 03 02:48:52 worker01 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4441.
Jun 03 02:48:52 worker01 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 03 02:48:52 worker01 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jun 03 02:48:52 worker01 kubelet[97187]:       --cgroup-driver string    Driver that the kubelet uses to manipulate cgroups on the host.  Possible values: 'cgroupfs', 'systemd' (default "cgroupfs") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jun 03 02:48:52 worker01 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE

Next, change Docker's cgroup driver (do this on every node):

Create a dedicated systemd drop-in directory for the docker service

mkdir -p /etc/systemd/system/docker.service.d

Write the daemon configuration

tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
},
  "storage-driver": "overlay2"
}
EOF

# restart the services
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

root@worker01:~# docker info |grep Cgroup
 Cgroup Driver: systemd
 Cgroup Version: 1
WARNING: No swap limit support

Install the cri-dockerd adapter for Docker support

Installing cri-dockerd

mkdir -p /data/softs && cd /data/softs

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.1/cri-dockerd-0.2.1.amd64.tgz
tar xf cri-dockerd-0.2.1.amd64.tgz
mv cri-dockerd/cri-dockerd /usr/local/bin/

#verify
cri-dockerd --version

root@master:/data/softs# cri-dockerd --version
cri-dockerd 0.2.1 (HEAD)

Create the service unit /etc/systemd/system/cri-docker.service from which cri-dockerd is started

[Unit]
Description=CRI Interface for Docker Application container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --image-pull-progress-deadline=30s --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 --cri-dockerd-root-directory=/var/lib/dockershim --docker-endpoint=unix:///var/run/docker.sock --cri-dockerd-root-directory=/var/lib/docker
ExecReload=/bin/kill -s HUP $MAINPID

TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s 
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

Optionally, create a dedicated socket unit /usr/lib/systemd/system/cri-docker.socket

[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target

Start the service

systemctl daemon-reload
systemctl enable cri-docker.service
systemctl restart cri-docker.service

systemctl status --no-pager cri-docker.service

#verify
crictl --runtime-endpoint /var/run/cri-dockerd.sock ps 

root@master:/data/softs# crictl --runtime-endpoint /var/run/cri-dockerd.sock ps 
I0604 10:50:12.902161  380647 util_unix.go:104] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/cri-dockerd.sock" fullURLFormat="unix:///var/run/cri-dockerd.sock"
I0604 10:50:12.911201  380647 util_unix.go:104] "Using this format as endpoint is deprecated, please consider using full url format." deprecatedFormat="/var/run/cri-dockerd.sock" fullURLFormat="unix:///var/run/cri-dockerd.sock"
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD

The util_unix.go:104 deprecation messages can be silenced with the following crictl configuration:

# cat /etc/crictl.yaml
runtime-endpoint: "unix:///var/run/cri-dockerd.sock"
image-endpoint: "unix:///var/run/cri-dockerd.sock"
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false

Test it

crictl ps

root@master:/data/softs# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD

Next, make sure every host receives these configuration files:

root@master:/data/softs# scp /etc/systemd/system/cri-docker.service worker01:/etc/systemd/system/cri-docker.service
cri-docker.service                                                               100%  934     1.9MB/s   00:00    
root@master:/data/softs# scp /usr/lib/systemd/system/cri-docker.socket worker01:/usr/lib/systemd/system/cri-docker.socket
cri-docker.socket                                                                100%  210   458.5KB/s   00:00    
root@master:/data/softs# scp /etc/crictl.yaml worker01:/etc/crictl.yaml
crictl.yaml                                                                      100%  183   718.5KB/s   00:00
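
After the files arrive, each worker still needs systemd reloaded and the service enabled (a sketch, assuming the same hostnames as above):

ssh worker01 'systemctl daemon-reload && systemctl enable --now cri-docker.service'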

Create the cluster

Adjust kubelet on every node

#cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
ExecStart=... --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --containerd=unix:///var/run/cri-dockerd.sock

systemctl daemon-reload
systemctl restart kubelet
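
A quick way to confirm that kubelet is now pointed at cri-dockerd (this greps the effective unit, so it works even while kubelet is still crash-looping before kubeadm init):

systemctl cat kubelet | grep cri-dockerd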

Initialize the cluster with kubeadm

kubeadm init --kubernetes-version=1.24.1 \
--apiserver-advertise-address=192.168.81.20 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.211.0.0/16 \
--cri-socket unix:///var/run/cri-dockerd.sock \
--ignore-preflight-errors=Swap

root@master:/data/softs# kubeadm init --kubernetes-version=1.24.1 \
> --apiserver-advertise-address=192.168.81.20 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.211.0.0/16 \
> --cri-socket unix:///var/run/cri-dockerd.sock \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.81.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.81.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.81.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.002541 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: zpqirm.so0xmeo6b46gaj41
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.20:6443 --token zpqirm.so0xmeo6b46gaj41 \
        --discovery-token-ca-cert-hash sha256:e8469d13b8ff07ce2803134048bb109a16e6b15b9e3279c4c556066549025c47 
root@master:/data/softs# mkdir -p $HOME/.kube
root@master:/data/softs#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:/data/softs#   sudo chown $(id -u):$(id -g) $HOME/.kube/config

On the worker node

root@worker01:/data/softs# kubeadm join 192.168.81.20:6443 --token zpqirm.so0xmeo6b46gaj41 \
>         --discovery-token-ca-cert-hash sha256:e8469d13b8ff07ce2803134048bb109a16e6b15b9e3279c4c556066549025c47 
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

This happens because two CRI endpoints exist on the host: containerd (installed by default) and the newly added cri-dockerd, so kubeadm refuses to pick one automatically.
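
Both sockets are indeed present on the worker, which is easy to confirm:

# two CRI sockets exist, hence the "multiple CRI endpoints" error
ls -l /var/run/containerd/containerd.sock /var/run/cri-dockerd.sock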

Run kubeadm join again, this time selecting cri-dockerd explicitly with --cri-socket:

root@worker01:/data/softs# kubeadm join 192.168.81.20:6443 --token zpqirm.so0xmeo6b46gaj41         --discovery-token-ca-cert-hash sha256:e8469d13b8ff07ce2803134048bb109a16e6b15b9e3279c4c556066549025c47  --cri-socket unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
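
Back on the control-plane you can also verify which runtime each node registered with; the CONTAINER-RUNTIME column of the wide output should report docker:// for nodes joined through cri-dockerd:

# -o wide adds a CONTAINER-RUNTIME column per node
kubectl get nodes -o wide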

Creating a Cluster with the CRI-O Runtime

For reference, see the official Kubernetes documentation on container runtimes.

Install CRI-O

OS=xUbuntu_20.04
CRIO_VERSION=1.24
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list 
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list

curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS/Release.key | sudo apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add -

# Alternative (kept commented): keyring-based repo setup, since apt-key is deprecated; note it uses the $CRIO_VERSION variable defined above.
#echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
#echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list

#mkdir -p /usr/share/keyrings
#curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg
#curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg

apt-get update
apt-get install cri-o cri-o-runc

systemctl start crio
systemctl enable crio
systemctl status crio

Modify the configuration

Change the default Pod network CIDR

#/etc/cni/net.d/100-crio-bridge.conf
sed -i 's/10.85.0.0/10.211.0.0/g' /etc/cni/net.d/100-crio-bridge.conf
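
A quick check that the sed edit took effect (the file is CRI-O's default bridge CNI config):

# the new Pod CIDR should now appear in the bridge config
grep -n '10.211' /etc/cni/net.d/100-crio-bridge.conf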

Modify the basic CRI-O configuration

# grep -Env '#|^$|^\[' /etc/crio/crio.conf
169:cgroup_manager = "systemd"
451:pause_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
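
You can make those two edits by hand in /etc/crio/crio.conf; below is a hedged sed sketch that assumes both keys appear in the stock file (possibly commented out with '#'):

# set the cgroup driver and the pause image (assumption: keys exist, possibly commented)
sed -i 's|^#\? *cgroup_manager *=.*|cgroup_manager = "systemd"|' /etc/crio/crio.conf
sed -i 's|^#\? *pause_image *=.*|pause_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"|' /etc/crio/crio.conf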

Restart the service to apply the configuration

systemctl restart crio

Verify the result

curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info

root@first:~# curl -v --unix-socket /var/run/crio/crio.sock http://localhost/info
*   Trying /var/run/crio/crio.sock:0...
* Connected to localhost (/var/run/crio/crio.sock) port 80 (#0)
> GET /info HTTP/1.1
> Host: localhost
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Date: Sat, 04 Jun 2022 15:10:27 GMT
< Content-Length: 239
< 
* Connection #0 to host localhost left intact
{"storage_driver":"overlay","storage_root":"/var/lib/containers/storage","cgroup_driver":"systemd","default_id_mappings":{"uids":[{"container_id":0,"host_id":0,"size":4294967295}],"gids":[{"container_id":0,"host_id":0,"size":4294967295}]}}

Configure crictl.yaml

# cat /etc/crictl.yaml
runtime-endpoint: "unix:///var/run/crio/crio.sock"
image-endpoint: "unix:///var/run/crio/crio.sock"
timeout: 10
debug: false
pull-image-on-create: true
disable-pull-on-run: false
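
With this file in place, crictl talks to CRI-O without extra flags; a quick check:

# both commands should answer over /var/run/crio/crio.sock
crictl version
crictl info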

Initialize the cluster

Modify the kubelet parameters (do this on every node that will run CRI-O)

# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
ExecStart=...--container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m

systemctl daemon-reload
systemctl restart kubelet
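
After the restart, confirm that the kubelet actually picked up the new runtime endpoint:

# the crio.sock endpoint should appear in the running kubelet's command line
systemctl status kubelet --no-pager
ps -ef | grep kubelet | grep crio.sock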

Cluster initialization

kubeadm init --kubernetes-version=1.24.1 \
--apiserver-advertise-address=192.168.81.1 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.211.0.0/16 \
--cri-socket unix:///var/run/crio/crio.sock \
--ignore-preflight-errors=Swap

root@main:~# kubeadm init --kubernetes-version=1.24.1 \
> --apiserver-advertise-address=192.168.81.1 \
> --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
> --service-cidr=10.96.0.0/12 \
> --pod-network-cidr=10.211.0.0/16 \
> --cri-socket unix:///var/run/crio/crio.sock \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local main] and IPs [10.96.0.1 192.168.81.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost main] and IPs [192.168.81.1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost main] and IPs [192.168.81.1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.003679 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node main as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node main as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: k9dmq6.cuhj0atd4jhz4y6o
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.1:6443 --token k9dmq6.cuhj0atd4jhz4y6o \
        --discovery-token-ca-cert-hash sha256:99de4906a2f690147d59ee71c1e2e916e64b6a8f6efae5bd28bebcb711cd28ab

Join a worker node to the cluster

root@worker03:~# kubeadm join 192.168.81.1:6443 --token k9dmq6.cuhj0atd4jhz4y6o \
>         --discovery-token-ca-cert-hash sha256:99de4906a2f690147d59ee71c1e2e916e64b6a8f6efae5bd28bebcb711cd28ab 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
