References
Installing Kubernetes v1.20.x with kubeadm | Kuboard
Building a k8s cluster from scratch - 许大仙 - 博客园
Installing ntp via yum to sync time on CentOS 8.0 - 吴昊博客
Syncing time on CentOS 8 - tlanyan
Installing a specific Docker version with yum -
How to obtain the token and other parameters when joining a node to a k8s cluster - 魂醉的一亩二分地 - CSDN博客
CentOS - Docker: From Beginner to Practice (Docker 从入门到实践)
SELinux explained, with its configuration file - 大熊子 - 博客园
Common k8s commands - 云+社区 - Tencent Cloud
k8s troubleshooting
Kubernetes Architecture · Kubernetes Handbook - Kubernetes中文指南/云原生应用架构实践手册 by Jimmy Song (宋净超)
Overview
- kubelet: the node agent; it receives instructions from the control plane (master) and manages the containers on its node
- kubectl: the command-line client used to send requests to the Kubernetes API server
- kubeadm: the cluster bootstrapping tool, used mainly to initialize the control plane and join nodes
- kube-proxy: runs on every node and implements Service load balancing for pods
- etcd: the distributed key-value store behind service discovery and shared configuration; in this setup it runs on the master node
- kube-apiserver: the front end of the control plane, exposing the cluster's API to clients inside and outside the cluster
[image:953B6305-D5ED-4633-B841-012BB838B13B-943-00010361CA98AEA2/AA126061-201F-4424-8B44-65C8C4297B26.png]
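Once the cluster built below is running, these components can be observed directly. A quick check, assuming kubectl has already been configured on the master as described later in this guide:
kubectl get pods -n kube-system -o wide   # apiserver, controller-manager, scheduler, etcd, kube-proxy, coredns pods
kubectl get nodes -o wide                 # kubelet version and status of each node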
Setup Steps
- Prepare two servers
One master and one node, forming a single-master cluster; Alibaba Cloud ECS pay-as-you-go instances are used here
[image:5B20920A-8EB1-49DB-AFFA-E54A2F6D687A-427-000011201F8771E1/711F636B-5406-45D1-9739-D2F3520AA3FB.png]
master:8.140.134.10
node1:8.140.109.105
The master node needs at least 2 CPU cores and 4 GB of RAM
- Set the hostname
hostnamectl set-hostname k8s-master   # run on the master; use k8s-node1 on the node
Check the result:
[root@k8s-master ~]# hostnamectl status
Static hostname: k8s-master
Icon name: computer-vm
Chassis: vm
Machine ID: 20201120143309750601020764519652
Boot ID: 5d3b7ad7a3174bbca92120abc8c93bd5
Virtualization: kvm
Operating System: CentOS Linux 8 (Core)
CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-193.28.1.el8_2.x86_64
Architecture: x86-64
[root@k8s-master ~]#
- Disable the firewall
[root@k8s-master ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
The firewall is already disabled by default here. If it is not, run systemctl stop firewalld to stop it, and then systemctl disable firewalld to keep it from starting on boot.
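For reference, the full sequence on a host where firewalld is still running:
systemctl stop firewalld        # stop the firewall immediately
systemctl disable firewalld     # do not start it on boot
systemctl status firewalld      # should now report inactive (dead)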
- Disable SELinux
To let containers access the host filesystem and avoid trouble, SELinux needs to be disabled
[root@k8s-master ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
SELinux is already disabled here by default. If it is not, run
sed -i 's/enforcing/disabled/' /etc/selinux/config
to set the SELinux parameter to disabled, then reboot the server
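A common variant, for reference (setenforce takes effect immediately, while the config change takes effect after a reboot; setenforce prints an error if SELinux is already disabled, which is harmless here):
setenforce 0                                                          # switch to permissive for the current boot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config  # disable permanently
getenforce                                                            # Permissive now, Disabled after a reboot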
- Disable swap
Swap lets the system spill memory to disk when RAM runs low, at a large performance cost since disk I/O is far slower than RAM; kubelet also refuses to start with swap enabled by default, so it must be turned off
[root@k8s-master ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Nov 20 06:36:28 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=edf839fd-8e1a-4373-946a-c32c9b459611 / xfs defaults 0 0
[root@k8s-master ~]# free -h
total used free shared buff/cache available
Mem: 15Gi 196Mi 14Gi 1.0Mi 288Mi 14Gi
Swap: 0B 0B 0B
Swap is not enabled here by default. If you need to disable it, run
sed -ri 's/.*swap.*/#&/' /etc/fstab
to comment out the swap entry, then reboot
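If swap is currently on, it can also be turned off without a reboot; a short sketch:
swapoff -a                           # disable swap immediately
sed -ri 's/.*swap.*/#&/' /etc/fstab  # keep it off across reboots
free -h                              # the Swap line should now show 0B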
- Configure the hosts file
cat >> /etc/hosts << EOF
8.140.134.10 k8s-master
8.140.109.105 k8s-node1
EOF
Ping each host from the other to verify connectivity; note that the firewall must be disabled
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (8.140.109.105) 56(84) bytes of data.
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=1 ttl=62 time=0.337 ms
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=2 ttl=62 time=0.288 ms
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=3 ttl=62 time=0.303 ms
64 bytes from k8s-node1 (8.140.109.105): icmp_seq=4 ttl=62 time=0.299 ms
[root@k8s-node1 ~]# ping k8s-master
PING k8s-master (8.140.134.10) 56(84) bytes of data.
64 bytes from k8s-master (8.140.134.10): icmp_seq=1 ttl=62 time=0.328 ms
64 bytes from k8s-master (8.140.134.10): icmp_seq=2 ttl=62 time=0.318 ms
64 bytes from k8s-master (8.140.134.10): icmp_seq=3 ttl=62 time=0.375 ms
- Install Docker
- Install the repository management tool yum-utils
yum install -y yum-utils
[root@k8s-master ~]# rpm -qa| grep yum-utils
yum-utils-4.0.17-5.el8.noarch
- Add the docker-ce repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
List the current repositories:
[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# ls
CentOS-AppStream.repo CentOS-centosplus.repo CentOS-Debuginfo.repo CentOS-Extras.repo CentOS-Media.repo CentOS-Sources.repo
CentOS-Base.repo CentOS-CR.repo CentOS-epel.repo CentOS-fasttrack.repo CentOS-PowerTools.repo CentOS-Vault.repo
Add the repository:
[root@k8s-master yum.repos.d]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master yum.repos.d]#
[root@k8s-master yum.repos.d]# ls
CentOS-AppStream.repo CentOS-centosplus.repo CentOS-Debuginfo.repo CentOS-Extras.repo CentOS-Media.repo CentOS-Sources.repo docker-ce.repo
CentOS-Base.repo CentOS-CR.repo CentOS-epel.repo CentOS-fasttrack.repo CentOS-PowerTools.repo CentOS-Vault.repo
- Install a specific version of docker-ce
1. Remove any existing Docker installation, if one is present
[root@k8s-node1 ~]# rpm -qa| grep docker
docker-ce-20.10.0-3.el8.x86_64
docker-ce-cli-20.10.3-3.el8.x86_64
docker-ce-rootless-extras-20.10.3-3.el8.x86_64
[root@k8s-master ~]# sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
2. List the available docker-ce versions
[root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r
docker-ce.x86_64 3:20.10.3-3.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.2-3.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.1-3.el8 docker-ce-stable
docker-ce.x86_64 3:20.10.0-3.el8 docker-ce-stable
docker-ce.x86_64 3:19.03.15-3.el8 docker-ce-stable
docker-ce.x86_64 3:19.03.14-3.el8 docker-ce-stable
docker-ce.x86_64 3:19.03.13-3.el8 docker-ce-stable
Last metadata expiration check: 0:00:53 ago on Fri Feb 19 14:59:28 2021.
Available Packages
3. Install the chosen version
yum install -y docker-ce-19.03.15
4. Start Docker and enable it on boot
systemctl start docker
systemctl enable docker.service
[root@k8s-master ~]# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.0
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
- Configure a Docker registry mirror
[root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker
其中"exec-opts": ["native.cgroupdriver=systemd”],需要设置,且kubelet的driver需要保持一致,否则获取镜像获取启动都有问题
- Install kubeadm, kubelet and kubectl
1. Add the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install the packages
yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1
3. Configure the kubelet cgroup driver
This keeps the kubelet's cgroup driver consistent with Docker's, so image pulls and container startup do not conflict
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
4. Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet
If kubelet has already been started, checking it with
systemctl status kubelet
will show kubelet.service: main process exited, code=exited, status=255/n/a.
The logs (journalctl -xefu kubelet) and a bit of research show that kubelet simply keeps restarting until kubeadm init has run; this can be ignored, since the service becomes healthy after init
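The two checks mentioned above, for reference:
systemctl status kubelet --no-pager   # shows the crash-loop state before init
journalctl -xefu kubelet              # follows the kubelet log (Ctrl-C to stop)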
- Initialize the master
Method 1
- Prepare the init configuration file
1. Generate the default configuration
kubeadm config print init-defaults > kubeadm.yaml
2. Edit the configuration
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.89.142
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
#imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.20.1
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
- advertiseAddress: 172.16.89.142, set this to the server's private (internal) IP; this is the address the apiserver advertises
- imageRepository: registry.aliyuncs.com/google_containers, switched to the Alibaba Cloud mirror so failed image pulls do not break master startup
- podSubnet: 10.244.0.0/16, the pod network CIDR; the flannel plugin expects this subnet
- kubernetesVersion: v1.20.1, must match the installed kubelet version (these edits are also sketched as commands right after this list)
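A rough sketch of making these edits with sed, assuming the file generated above is kubeadm.yaml and the private IP is 172.16.89.142; adjust the values for your own environment:
sed -i 's/advertiseAddress: .*/advertiseAddress: 172.16.89.142/' kubeadm.yaml
sed -i 's#imageRepository: .*#imageRepository: registry.aliyuncs.com/google_containers#' kubeadm.yaml
sed -i 's/kubernetesVersion: .*/kubernetesVersion: v1.20.1/' kubeadm.yaml
# podSubnet is not in the default output; append it under networking, right after serviceSubnet
sed -i 's#serviceSubnet: 10.96.0.0/12#&\n  podSubnet: 10.244.0.0/16#' kubeadm.yaml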
- Check the images required for initialization
[root@k8s-master ~]# kubeadm config images list --config kubeadm.yaml
k8s.gcr.io/kube-apiserver:v1.20.1
k8s.gcr.io/kube-controller-manager:v1.20.1
k8s.gcr.io/kube-scheduler:v1.20.1
k8s.gcr.io/kube-proxy:v1.20.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
Pull the images from Alibaba Cloud in advance and retag them to the k8s.gcr.io names:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1 k8s.gcr.io/kube-controller-manager:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
Alternatively, set the image repository in the init config to the Alibaba Cloud mirror and pull directly:
[root@k8s-master ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
Here all images are pulled from Alibaba Cloud. If that fails, the images can also be retagged in the local Docker daemon as above, or pulled from a private Harbor registry. The per-image pull/tag commands can be condensed into a loop, sketched below.
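A minimal loop equivalent to the pull/tag commands above, assuming the v1.20.1 image list shown by kubeadm config images list:
ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.20.1 kube-controller-manager:v1.20.1 kube-scheduler:v1.20.1 \
           kube-proxy:v1.20.1 pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  docker pull "${ALIYUN}/${img}"                       # pull from the Alibaba Cloud mirror
  docker tag  "${ALIYUN}/${img}" "k8s.gcr.io/${img}"   # retag to the name kubeadm expects
done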
- Run the initialization
kubeadm init --config kubeadm.yaml --v=5
[root@k8s-master ~]# kubeadm init --config kubeadm.yaml --v=5
I0223 09:39:07.830262 3023 initconfiguration.go:201] loading configuration from "kubeadm.yaml"
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
I0223 09:39:07.900068 3023 checks.go:577] validating Kubernetes and kubeadm version
I0223 09:39:07.900088 3023 checks.go:166] validating if the firewall is enabled and active
I0223 09:39:07.923727 3023 checks.go:201] validating availability of port 6443
I0223 09:39:07.923822 3023 checks.go:201] validating availability of port 10259
I0223 09:39:07.923845 3023 checks.go:201] validating availability of port 10257
I0223 09:39:07.923865 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0223 09:39:07.923874 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0223 09:39:07.923881 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0223 09:39:07.923889 3023 checks.go:286] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0223 09:39:07.923897 3023 checks.go:432] validating if the connectivity type is via proxy or direct
I0223 09:39:07.923925 3023 checks.go:471] validating http connectivity to first IP address in the CIDR
I0223 09:39:07.923948 3023 checks.go:471] validating http connectivity to first IP address in the CIDR
I0223 09:39:07.923959 3023 checks.go:102] validating the container runtime
I0223 09:39:07.984679 3023 checks.go:128] validating if the "docker" service is enabled and active
I0223 09:39:08.059694 3023 checks.go:335] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0223 09:39:08.059742 3023 checks.go:335] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0223 09:39:08.059759 3023 checks.go:649] validating whether swap is enabled or not
I0223 09:39:08.059784 3023 checks.go:376] validating the presence of executable conntrack
I0223 09:39:08.059805 3023 checks.go:376] validating the presence of executable ip
I0223 09:39:08.059818 3023 checks.go:376] validating the presence of executable iptables
I0223 09:39:08.059831 3023 checks.go:376] validating the presence of executable mount
I0223 09:39:08.059855 3023 checks.go:376] validating the presence of executable nsenter
I0223 09:39:08.059870 3023 checks.go:376] validating the presence of executable ebtables
I0223 09:39:08.059883 3023 checks.go:376] validating the presence of executable ethtool
I0223 09:39:08.059894 3023 checks.go:376] validating the presence of executable socat
I0223 09:39:08.059909 3023 checks.go:376] validating the presence of executable tc
I0223 09:39:08.059922 3023 checks.go:376] validating the presence of executable touch
I0223 09:39:08.059937 3023 checks.go:520] running all checks
I0223 09:39:08.126650 3023 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.100.2.136:53: no such host
I0223 09:39:08.127123 3023 checks.go:618] validating kubelet version
I0223 09:39:08.184792 3023 checks.go:128] validating if the "kubelet" service is enabled and active
I0223 09:39:08.195795 3023 checks.go:201] validating availability of port 10250
I0223 09:39:08.195847 3023 checks.go:201] validating availability of port 2379
I0223 09:39:08.195865 3023 checks.go:201] validating availability of port 2380
I0223 09:39:08.195885 3023 checks.go:249] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0223 09:39:08.224823 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
I0223 09:39:08.252056 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
I0223 09:39:08.280014 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
I0223 09:39:08.307725 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.20.1
I0223 09:39:08.335600 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/pause:3.2
I0223 09:39:08.364146 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
I0223 09:39:08.392096 3023 checks.go:839] image exists: registry.aliyuncs.com/google_containers/coredns:1.7.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0223 09:39:08.392141 3023 certs.go:110] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0223 09:39:08.605798 3023 certs.go:474] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.35.194]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0223 09:39:08.921838 3023 certs.go:110] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0223 09:39:08.978884 3023 certs.go:474] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0223 09:39:09.232775 3023 certs.go:110] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0223 09:39:09.269962 3023 certs.go:474] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.17.35.194 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.17.35.194 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0223 09:39:09.759214 3023 certs.go:76] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0223 09:39:09.809489 3023 kubeconfig.go:101] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0223 09:39:09.871803 3023 kubeconfig.go:101] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0223 09:39:09.929750 3023 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0223 09:39:09.999157 3023 kubeconfig.go:101] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0223 09:39:10.240935 3023 kubelet.go:63] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0223 09:39:10.374064 3023 manifests.go:96] [control-plane] getting StaticPodSpecs
I0223 09:39:10.374422 3023 certs.go:474] validating certificate period for CA certificate
I0223 09:39:10.374495 3023 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0223 09:39:10.374502 3023 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0223 09:39:10.374507 3023 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0223 09:39:10.388510 3023 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0223 09:39:10.388531 3023 manifests.go:96] [control-plane] getting StaticPodSpecs
I0223 09:39:10.388793 3023 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0223 09:39:10.388801 3023 manifests.go:109] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0223 09:39:10.388807 3023 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0223 09:39:10.388812 3023 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0223 09:39:10.388818 3023 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0223 09:39:10.389622 3023 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0223 09:39:10.389637 3023 manifests.go:96] [control-plane] getting StaticPodSpecs
I0223 09:39:10.389896 3023 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0223 09:39:10.390427 3023 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0223 09:39:10.391160 3023 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0223 09:39:10.391171 3023 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.001635 seconds
I0223 09:39:24.394483 3023 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0223 09:39:24.401973 3023 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
I0223 09:39:24.406809 3023 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0223 09:39:24.406822 3023 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0223 09:39:25.435228 3023 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I0223 09:39:25.435520 3023 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0223 09:39:25.435693 3023 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0223 09:39:25.437065 3023 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0223 09:39:25.439866 3023 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0223 09:39:25.440473 3023 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I0223 09:39:25.818119 3023 request.go:591] Throttling request took 70.393404ms, request: POST:https://172.17.35.194:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.17.35.194:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:530d61b5d25258f27bfce7e1d53b0604bd94164eb7cd926cb79186ae350935bd
Run the following commands in order to complete the initialization (this sets up kubectl access for the current user):
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
If initialization fails, run
kubeadm reset
to reset the node and remove the leftovers with rm -rf /etc/kubernetes/*; if init complains that the etcd data directory is not empty, also delete it with rm -rf /var/lib/etcd. A consolidated retry sequence is sketched below.
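A sketch of a typical cleanup-and-retry sequence; the paths match the errors kubeadm reports, double-check before deleting:
kubeadm reset -f                          # tear down whatever the failed init created
rm -rf /etc/kubernetes/* $HOME/.kube/config
rm -rf /var/lib/etcd                      # only needed if init reported a non-empty etcd data dir
kubeadm init --config kubeadm.yaml --v=5  # retry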
Method 2: pass the parameters directly on the command line instead of using a config file
kubeadm init --kubernetes-version=1.20.1 --pod-network-cidr 10.244.0.0/16 --v=5
- Check component health
[root@k8s-master ~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
If not all components are healthy, edit the corresponding manifests under
/etc/kubernetes/manifests/
and comment out the --port=0 flag (on 1.20 the scheduler and controller-manager commonly report unhealthy because that flag disables their health-check ports); a sketch follows
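A sketch of that fix, assuming the usual kube-scheduler.yaml and kube-controller-manager.yaml manifests; the kubelet picks up manifest changes automatically, but restarting it does no harm:
cd /etc/kubernetes/manifests/
sed -i 's/^\( *- --port=0\)/#\1/' kube-scheduler.yaml kube-controller-manager.yaml
systemctl restart kubelet
kubectl get componentstatuses            # scheduler and controller-manager should turn Healthy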
- Join the worker node
[root@k8s-node2 ~]# kubeadm join 172.17.35.194:6443 --token avy7qv.zffy2i3bdivbp57x --discovery-token-ca-cert-hash sha256:530d61b5d25258f27bfce7e1d53b0604bd94164eb7cd926cb79186ae350935bd
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0223 09:54:40.991309 9033 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-node2 ~]# kubeadm join 172.17.35.194:6443 --token avy7qv.zffy2i3bdivbp57x --discovery-token-ca-cert-hash sha256:530d61b5d25258f27bfce7e1d53b0604bd94164eb7cd926cb79186ae350935bd
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
If the node has previously joined another master, or fails to join, run
kubeadm reset
to reset the node environment and then join again. If the join command or token has been lost, a new one can be generated on the master, as shown below.
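To print a fresh join command on the master (the bootstrap token from kubeadm init expires after 24 hours):
kubeadm token create --print-join-command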
- Deploy the CNI network plugin
1. Check the cluster state after master init; without a network plugin the CoreDNS pods remain in ContainerCreating
[root@k8s-master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7f89b7bc75-2gqch 0/1 ContainerCreating 0 85s
kube-system coredns-7f89b7bc75-9pkxr 0/1 ContainerCreating 0 85s
kube-system etcd-k8s-master 1/1 Running 0 92s
kube-system kube-apiserver-k8s-master 1/1 Running 0 92s
kube-system kube-controller-manager-k8s-master 0/1 Running 0 58s
kube-system kube-proxy-x6qb4 1/1 Running 0 86s
kube-system kube-scheduler-k8s-master 0/1 Running 0 76s
2. Download the deployment manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
3. Deploy the network plugin
kubectl apply -f kube-flannel.yml
Check the cluster status on the master node:
[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-7f89b7bc75-2gqch 1/1 Running 0 18m 10.244.0.3 k8s-master
kube-system coredns-7f89b7bc75-9pkxr 1/1 Running 0 18m 10.244.0.2 k8s-master
kube-system etcd-k8s-master 1/1 Running 0 18m 172.17.35.194 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 0 18m 172.17.35.194 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 0 17m 172.17.35.194 k8s-master
kube-system kube-flannel-ds-jwk99 1/1 Running 0 3m9s 172.17.35.192 k8s-node2
kube-system kube-flannel-ds-mf696 1/1 Running 0 14s 172.17.35.189 k8s-node1
kube-system kube-flannel-ds-mtvdr 1/1 Running 0 8m52s 172.17.35.194 k8s-master
kube-system kube-proxy-5xbwq 1/1 Running 0 14s 172.17.35.189 k8s-node1
kube-system kube-proxy-l4txf 1/1 Running 0 3m9s 172.17.35.192 k8s-node2
kube-system kube-proxy-x6qb4 1/1 Running 0 18m 172.17.35.194 k8s-master
kube-system kube-scheduler-k8s-master 1/1 Running 0 18m 172.17.35.194 k8s-master
A node's kubelet version must not be lower than the master's kubelet version, otherwise the node cannot be initialized into the cluster.
If a pod in the cluster misbehaves, run kubectl -n kube-system describe pod pod-name
to see why it failed to start; see the references at the top of this document. A few other common checks are listed below.
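Commonly used troubleshooting commands, for reference (substitute the real pod name):
kubectl get pods -A -o wide                        # overall pod status and node placement
kubectl -n kube-system describe pod pod-name       # events explaining scheduling/startup failures
kubectl -n kube-system logs pod-name               # container logs of the failing pod
journalctl -xeu kubelet --no-pager | tail -n 50    # node-level kubelet errors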