1.1 Environment Preparation for a kubeadm Install
1.1.1 Requirements
OS: CentOS 7.4+
Hardware: CPU >= 2 cores, RAM >= 2 GB
1.1.2 Machine List
ip | role | software | notes |
---|---|---|---|
192.168.165.198 | k8s-master | kube-apiserver kube-scheduler kube-controller-manager docker flannel kubelet | |
192.168.165.192 | k8s-node2 | kubelet kube-proxy docker flannel | |
192.168.165.193 | k8s-node3 | kubelet kube-proxy docker flannel | |
192.168.165.194 | k8s-node4 | kubelet kube-proxy docker flannel | |
1.1.3 Environment Initialization
- Disable the firewall and SELinux
Note: run on all nodes
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
setenforce: SELinux is disabled
- Disable the swap partition
Note: run on all nodes
[root@localhost ~]# swapoff -a
[root@localhost ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
- Set the hostname on each node and configure /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.165.198 k8s-master
192.168.165.192 k8s-node2
192.168.165.193 k8s-node3
192.168.165.194 k8s-node4
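The hostnames themselves are set with hostnamectl. A minimal sketch (the `hostname_for_ip` helper is invented here for illustration; the IP-to-name mapping simply mirrors the machine list in 1.1.2):

```shell
# Helper (illustrative only): map a node IP to its hostname,
# using the values from the machine list above.
hostname_for_ip() {
  case "$1" in
    192.168.165.198) echo k8s-master ;;
    192.168.165.192) echo k8s-node2  ;;
    192.168.165.193) echo k8s-node3  ;;
    192.168.165.194) echo k8s-node4  ;;
    *) return 1 ;;
  esac
}

# On each node, run (uncommented) with that node's primary IP:
#   hostnamectl set-hostname "$(hostname_for_ip 192.168.165.198)"
hostname_for_ip 192.168.165.192   # prints k8s-node2
```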
- Kernel tuning: pass bridged IPv4 traffic to the iptables chains
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@localhost ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
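One caveat worth checking here: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so `sysctl --system` can silently skip them. A hedged addition to make sure the settings actually apply (run on all nodes; the modules-load.d file makes the module load persistent across reboots):

```shell
# Load the bridge-netfilter module now, and on every boot.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Verify that both keys now report 1.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```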
- Set the system time zone and sync with a time server
[root@localhost ~]# yum install -y ntpdate
Loaded plugins: fastestmirror
...(mirror selection, dependency resolution and download output trimmed; yum
imports the CentOS 7 signing key 0xF4A80EB5 from /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7)...
Installed:
  ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2
Complete!
[root@localhost ~]# ntpdate time.windows.com
4 Aug 17:13:02 ntpdate[17303]: adjust time server 20.189.79.72 offset -0.018538 sec
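ntpdate performs a one-shot sync, and the section heading also mentions the time zone, which the transcript above does not show. A sketch covering both (Asia/Shanghai is an assumption, adjust to your locale; enabling CentOS 7's chronyd service is an equally valid way to keep clocks in sync):

```shell
# Set the time zone (assumed value; change as appropriate).
timedatectl set-timezone Asia/Shanghai
# Re-sync the clock hourly so it does not drift after the one-shot ntpdate.
echo '0 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1' > /etc/cron.d/ntpdate-sync
```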
1.1.4 Installing Docker
If wget is not installed, install it first with yum install -y wget.
[root@localhost ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2021-08-04 17:21:14--  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com... connected. HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/docker-ce.repo'
2021-08-04 17:21:15 (154 MB/s) - '/etc/yum.repos.d/docker-ce.repo' saved [2081/2081]
[root@localhost ~]# yum -y install docker-ce-18.06.1.ce-3.el7
Loaded plugins: fastestmirror
...(dependency resolution and download output trimmed; yum imports the Docker
Release (CE rpm) GPG key 0x621E9F35 from https://mirrors.aliyun.com/docker-ce/linux/centos/gpg,
installs 8 dependencies and updates 3 others)...
Installed:
  docker-ce.x86_64 0:18.06.1.ce-3.el7
Complete!
[root@localhost ~]# systemctl enable docker && systemctl start docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@localhost ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
1.1.5 Add the Kubernetes YUM Repository
[root@localhost ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package
> EOF
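Before installing, it can help to confirm that the new repository is usable and see which versions it offers (this guide pins 1.15.0 below):

```shell
# Rebuild the yum metadata cache and list the available versions.
yum makecache fast
yum list kubelet kubeadm kubectl --showduplicates | tail -n 20
```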
1.1.6 Install kubeadm, kubelet and kubectl
[root@localhost ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
Loaded plugins: fastestmirror
...(dependency resolution and download output trimmed; 7 dependencies are
installed as well, including kubernetes-cni-0.8.7, cri-tools-1.13.0, socat and
conntrack-tools)...
Installed:
  kubeadm.x86_64 0:1.15.0-0   kubectl.x86_64 0:1.15.0-0   kubelet.x86_64 0:1.15.0-0
Complete!
[root@localhost ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service
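A quick sanity check that the pinned versions are the ones that landed:

```shell
kubeadm version -o short          # expect v1.15.0
kubectl version --client --short
```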
2.1 Deploying the Kubernetes Master
Initialize the master:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.165.198 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
The pre-flight check stops at [WARNING IsDockerSystemdCheck]: Docker's cgroup driver (cgroupfs) does not match the recommended systemd driver used by the kubelet. Here we switch Docker over to match the kubelet:
Edit /usr/lib/systemd/system/docker.service and change the ExecStart line to ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
After the edit, reload systemd and restart Docker; docker info | grep Cgroup then reports systemd (the first check below was taken before the edit and still shows cgroupfs):
[root@k8s-master ~]# docker info | grep Cgroup
Cgroup Driver: cgroupfs
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# vi /usr/lib/systemd/system/docker.service
[root@k8s-master ~]# vi /usr/lib/systemd/system/docker.service
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# docker info | grep Cgroup
Cgroup Driver: systemd
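An alternative to editing the systemd unit (which a docker-ce package upgrade can overwrite) is to put the same setting in /etc/docker/daemon.json; a sketch for Docker 18.06 on CentOS 7. Note that if the driver is set both in daemon.json and on the dockerd command line, Docker refuses to start, so use only one of the two:

```shell
# Configure the cgroup driver via the daemon config file instead of the unit.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
```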
Run kubeadm init again, this time adding --pod-network-cidr=10.244.0.0/16 (required by the flannel add-on installed later); it now succeeds:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.165.198 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.165.198 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.165.198 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.165.198]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.005161 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 283bmn.5s8oey15nquac4mw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.165.198:6443 --token 283bmn.5s8oey15nquac4mw \
--discovery-token-ca-cert-hash sha256:b4e5d42f49230d88eeed7f7af3c49d6f2f3d1c1146df1640545636e8490e3175
[root@k8s-master ~]#
Follow the printed instructions. To roll an init back, run kubeadm reset, delete $HOME/.kube/config and /var/lib/etcd, and then run kubeadm init again.
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]#
Run the command below on each worker node to join it to the cluster; if it fails, add --v=2 to see detailed error output.
[root@k8s-node3 ~]# kubeadm join 192.168.165.198:6443 --token m8x4sa.ohcpv36ddk5dlivb --discovery-token-ca-cert-hash sha256:98aa44463bafe37f911b91d87b550574ef255ad665ffa62b23ebe71cd65a6519
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node3 ~]#
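Note that the token in this join command differs from the one printed by kubeadm init above: bootstrap tokens expire after 24 hours by default, so a new one was created. A fresh join command can be generated on the master at any time:

```shell
# Creates a new bootstrap token and prints the full `kubeadm join ...` line.
kubeadm token create --print-join-command
```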
After adding the nodes, check them with kubectl get node. On the master this first fails because the shell has no kubeconfig configured: kubeadm init does not bind one to the environment. Exporting KUBECONFIG to point at admin.conf fixes it. If another node shows the same error, scp /etc/kubernetes/admin.conf to it and set the variable the same way.
[root@k8s-master ~]# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@k8s-master ~]# source /etc/profile
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 13h v1.15.0
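The scp step mentioned above might look like this (node name taken from the machine list; adjust as needed):

```shell
# Copy the admin kubeconfig to a worker and point its shell at it.
scp /etc/kubernetes/admin.conf root@k8s-node2:/etc/kubernetes/admin.conf
ssh root@k8s-node2 'echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile'
```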
Install the network add-on (flannel)
This only needs to be done on the master
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# ps -ef|grep flannel
root 913 890 0 14:22 ? 00:00:01 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 10725 16706 0 14:42 pts/0 00:00:00 grep --color=auto flannel
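With the network add-on running, nodes that previously showed NotReady should turn Ready within a minute or two:

```shell
kubectl get nodes    # all nodes should report STATUS Ready
```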
At this point the basic setup is complete.
Test the k8s cluster:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-pnc5g 1/1 Running 0 4m26s
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        18h
service/nginx        NodePort    10.1.1.157   <none>        80:30755/TCP   4m8s
[root@k8s-master ~]# curl http://192.168.165.198:30755
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
...(rest of the standard nginx welcome page trimmed)...
As shown above, nginx is now reachable.
PS: I had initially omitted --pod-network-cidr=10.244.0.0/16 from kubeadm init, which made the flannel install fail repeatedly and cost a lot of debugging time. The full, correct command is:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.165.198 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
Without --pod-network-cidr=10.244.0.0/16, the flannel pods fail like this:
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-bccdc95cf-rd6w2 0/1 ContainerCreating 0 17h
kube-system coredns-bccdc95cf-rkrtm 0/1 ContainerCreating 0 17h
kube-system etcd-k8s-master 1/1 Running 0 17h
kube-system kube-apiserver-k8s-master 1/1 Running 0 17h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 17s
kube-system kube-flannel-ds-amd64-h4gj5 0/1 CrashLoopBackOff 39 3h1m
kube-system kube-flannel-ds-amd64-qv52d 0/1 CrashLoopBackOff 39 3h1m
kube-system kube-flannel-ds-amd64-tddjl 0/1 Error 40 3h1m
kube-system kube-flannel-ds-amd64-z2gbl 0/1 CrashLoopBackOff 39 3h1m
kube-system kube-proxy-4zkbp 1/1 Running 0 3h22m
kube-system kube-proxy-lrgz7 1/1 Running 0 17h
kube-system kube-proxy-nrxdd 1/1 Running 0 3h4m
kube-system kube-proxy-vws6d 1/1 Running 0 3h23m
kube-system kube-scheduler-k8s-master 1/1 Running 0 17h
Inspecting the pod's logs reveals the actual error, Error registering network: failed to acquire lease: node "k8s-node3" pod cidr not assigned (note that kubectl logs needs -n kube-system for pods in that namespace):
[root@k8s-master ~]# kubectl logs kube-flannel-ds-amd64-h4gj5
Error from server (NotFound): pods "kube-flannel-ds-amd64-h4gj5" not found
[root@k8s-master ~]# kubectl logs kube-flannel-ds-amd64-h4gj5 -n kube-system
I0805 06:17:12.825680 1 main.go:514] Determining IP address of default interface
I0805 06:17:12.826441 1 main.go:527] Using interface with name ens192 and address 192.168.165.193
I0805 06:17:12.826499 1 main.go:544] Defaulting external address to interface address (192.168.165.193)
I0805 06:17:13.020546 1 kube.go:126] Waiting 10m0s for node controller to sync
I0805 06:17:13.020895 1 kube.go:309] Starting kube subnet manager
I0805 06:17:14.021040 1 kube.go:133] Node controller sync successful
I0805 06:17:14.021153 1 main.go:244] Created subnet manager: Kubernetes Subnet Manager - k8s-node3
I0805 06:17:14.021176 1 main.go:247] Installing signal handlers
I0805 06:17:14.021348 1 main.go:386] Found network config - Backend type: vxlan
I0805 06:17:14.021504 1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0805 06:17:14.022070 1 main.go:289] Error registering network: failed to acquire lease: node "k8s-node3" pod cidr not assigned
I0805 06:17:14.022189 1 main.go:366] Stopping shutdownHandler...
This error means the node was never assigned a pod CIDR. To fix it on a running cluster, edit /etc/kubernetes/manifests/kube-controller-manager.yaml and add two flags to the command list:
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
[root@k8s-master ~]# vi /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --use-service-account-credentials=true
- --allocate-node-cidrs=true
- --cluster-cidr=10.244.0.0/16
image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
Then run systemctl restart kubelet on the master; shortly afterwards everything reaches the Ready state:
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-bccdc95cf-rd6w2 1/1 Running 0 18h
kube-system coredns-bccdc95cf-rkrtm 1/1 Running 0 18h
kube-system etcd-k8s-master 1/1 Running 0 18h
kube-system kube-apiserver-k8s-master 1/1 Running 0 18h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 5m42s
kube-system kube-flannel-ds-amd64-h4gj5 1/1 Running 41 3h7m
kube-system kube-flannel-ds-amd64-qv52d 1/1 Running 40 3h7m
kube-system kube-flannel-ds-amd64-tddjl 1/1 Running 42 3h7m
kube-system kube-flannel-ds-amd64-z2gbl 1/1 Running 40 3h7m
kube-system kube-proxy-4zkbp 1/1 Running 0 3h27m
kube-system kube-proxy-lrgz7 1/1 Running 0 18h
kube-system kube-proxy-nrxdd 1/1 Running 0 3h10m
kube-system kube-proxy-vws6d 1/1 Running 0 3h28m
kube-system kube-scheduler-k8s-master 1/1 Running 0 18h
Some configuration file locations after installation:
[root@k8s-master manifests]# pwd
/etc/kubernetes/manifests
[root@k8s-master manifests]# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
References
- Quick Kubernetes cluster deployment with kubeadm