K8sMaster: manages the K8sNodes.
K8sNode: a worker node with a Docker environment and the k8s node components (kubelet, kube-proxy) that runs the containerized workloads.
Controller-manager: the brain of k8s. It monitors and manages the state of the whole cluster through the API Server and drives the cluster toward the desired state.
API Server: exposes HTTP REST interfaces for create/read/update/delete and watch operations on every kind of k8s resource object (Pod, RC, Service, and so on); it is the data bus and data hub of the whole system.
etcd: a highly available, strongly consistent key-value store for service discovery; in a Kubernetes cluster, etcd is mainly used for configuration sharing and service discovery.
Scheduler: finds the most suitable node in the cluster for each newly created Pod and schedules the Pod onto that K8sNode.
kubelet: the bridge between the Kubernetes master and each node; it carries out the tasks the master dispatches to its node and manages the Pods, and the containers inside them, on that node.
kube-proxy: a network proxy component that runs on every worker node and maintains the network rules on that node. These rules allow network sessions from inside or outside the cluster to reach Pods. It watches the API server for changes to resource objects and configures load balancing across the backends of each Service.
Pod: a packaged environment for a group of containers. In a Kubernetes cluster the Pod is the foundation of every workload type and the smallest unit k8s manages: a combination of one or more containers that share storage, network, namespaces, and a spec for how to run. (Analogy: k8s = school, Pod = class, container = student.)
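To make the shared environment concrete, here is a minimal sketch of a two-container Pod (the names, images, and paths are illustrative, not from this guide); both containers share one volume, and they can also reach each other over localhost because a Pod has a single IP:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # illustrative name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.20
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.35
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF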
First, download the files: cd /root && yum install -y git && git clone https://gitee.com/hanfeng_edu/mastering_kubernetes.git
...omitted...
Public key (id_dsa.pub), private key (id_dsa), and the authorized-keys file (authorized_keys).
Settings to check in /etc/ssh/sshd_config:
ChallengeResponseAuthentication no
PermitRootLogin yes # usually this one is enough
PasswordAuthentication yes
PubkeyAuthentication yes
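Under these settings, key-based login can be set up like this (a sketch; the node IP comes from the /etc/hosts list below, and the key type and path are examples):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # generate a key pair non-interactively
ssh-copy-id root@192.168.100.9               # append the public key to the node's authorized_keys
systemctl restart sshd                       # apply the sshd_config changes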
/etc/hosts
192.168.100.8 k8sMaster-1
192.168.100.9 k8sNode-1
192.168.100.10 k8sNode-2
#!/bin/bash
################# System environment setup #####################
# Disable SELinux and firewalld
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable the swap partition
swapoff -a
cp /etc/{fstab,fstab.bak}
cat /etc/fstab.bak | grep -v swap > /etc/fstab
# Kernel parameters for iptables/bridging
cat > /etc/sysctl.conf <<EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter
sysctl -p
# Synchronize the time
yum install -y ntpdate
ntpdate -u pool.ntp.org   # one-shot sync; the NTP server here is an example
ln -nfsv /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
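A quick sanity check after the script runs (a sketch; expected results in the comments):
getenforce                                  # Permissive now, Disabled after a reboot
free -m | grep -i swap                      # the Swap line should be all zeros
sysctl net.bridge.bridge-nf-call-iptables   # should print 1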
#!/bin/bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
Reference: https://docs.docker.com/engine/install/centos/
Remove old versions
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
Install the required dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Update the cache and install Docker CE
yum makecache fast
yum install docker-ce-18.06.3.ce-3.el7 docker-ce-cli-18.06.3.ce-3.el7 containerd.io -y
Configure a Docker registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxxxxx.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload &&
sudo systemctl restart docker
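To confirm the daemon is running the pinned version and picked up the mirror (a sketch):
docker version                                # client and server should both report 18.06.3-ce
docker info | grep -A1 -i 'registry mirrors'  # should list the mirror from daemon.json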
#!/bin/bash
# Dependencies the packages may need
yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the Aliyun repository and install the Kubernetes tools
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
systemctl enable kubelet
cat > /etc/docker/daemon.json <<EOF
...omitted...
EOF
1. Create the k8s configuration file on the master node
cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "192.168.100.8:6443"
apiServer:
  certSANs:
  - 192.168.100.8
networking:
  podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
In the configuration above, 192.168.100.8 is the address of the master node.
2. Run the following commands to initialize the cluster
# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
# mkdir -p $HOME/.kube
# cp -f /etc/kubernetes/admin.conf ${HOME}/.kube/config
# curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml | sed "s@192.168.0.0/16@10.244.0.0/16@g" | kubectl apply -f -
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
3. Join the worker nodes to the cluster
# kubeadm join 192.168.100.8:6443 --token hrz6jc.8oahzhyv74yrpem5 \
--discovery-token-ca-cert-hash sha256:25f51d27d64c55ea9d89d5af839b97d37dfaaf0413d00d481f7f59bd6556ee43
4. Check the cluster status
# kubectl get nodes
1. Install keepalived and related tools on all three master nodes
# yum install -y socat keepalived ipvsadm conntrack
#!/bin/bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
2. Create the following keepalived configuration file
# cat /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass just0kk
    }
    virtual_ipaddress {
        192.168.100.199
    }
}

virtual_server 192.168.100.199 6443 {
    delay_loop 6
    lb_algo rr          # round-robin scheduling
    lb_kind DR
    net_mask 255.255.255.0
    persistence_timeout 0
    protocol TCP

    real_server 192.168.100.13 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.100.12 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.100.14 6443 {
        weight 1
        SSL_GET {
            url {
                path /healthz
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
### Distribute this configuration file to the other two machines
# scp /etc/keepalived/keepalived.conf [email protected]:/etc/keepalived/
keepalived.conf 100% 1257 1.7MB/s 00:00
# scp /etc/keepalived/keepalived.conf [email protected]:/etc/keepalived/
Then just give the priority parameter a different value on each machine; the node with the highest priority becomes the MASTER.
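A sketch for adjusting the two copied files remotely (the IPs come from the scp commands above; the 90/80 priorities and the BACKUP state are conventional choices, not from the original):
for pair in 192.168.100.12:90 192.168.100.14:80; do
    ip=${pair%%:*}; prio=${pair##*:}
    ssh root@${ip} "sed -i 's/priority 100/priority ${prio}/; s/state MASTER/state BACKUP/' /etc/keepalived/keepalived.conf"
done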
Start keepalived on all three machines; if you can ping the virtual IP, the setup works.
# systemctl start keepalived
# ping 192.168.100.199
PING 192.168.100.199 (192.168.100.199) 56(84) bytes of data.
64 bytes from 192.168.100.199: icmp_seq=1 ttl=64 time=0.064 ms
--- 192.168.100.199 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
3. Create the k8s cluster initialization configuration file; this only needs to run on one of the master nodes.
$ cat /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controlPlaneEndpoint: "192.168.100.199:6443"
apiServer:
  certSANs:
  - 192.168.100.12
  - 192.168.100.13
  - 192.168.100.14
  - 192.168.100.199
networking:
  podSubnet: 10.244.0.0/16
imageRepository: "registry.aliyuncs.com/google_containers"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
4. Initialize the k8s cluster
# kubeadm init --config /etc/kubernetes/kubeadm-config.yaml
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# curl -fsSL https://docs.projectcalico.org/v3.9/manifests/calico.yaml | sed "s@192.168.0.0/16@10.244.0.0/16@g" | kubectl apply -f -
Check the pod status:
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7cc97544d-lx8zn 1/1 Running 0 67s
calico-node-v8zql 1/1 Running 0 67s
coredns-9d85f5447-s5q64 1/1 Running 0 77s
coredns-9d85f5447-wv4c5 1/1 Running 0 77s
etcd-gcp-honkong-k8s-doc04 1/1 Running 0 94s
kube-apiserver-gcp-honkong-k8s-doc04 1/1 Running 0 94s
kube-controller-manager-gcp-honkong-k8s-doc04 1/1 Running 0 94s
kube-proxy-mg6ns 1/1 Running 0 77s
kube-scheduler-gcp-honkong-k8s-doc04 1/1 Running 0 94s
5. Set up passwordless SSH from this master to the other masters (a sketch follows), then run the script below:
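A minimal sketch of that prerequisite (the IPs match the script below; the key type is an example):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 192.168.100.12 192.168.100.13; do
    ssh-copy-id root@${ip}    # enter each root password once
done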
# cat k8s-cluster-other-init.sh
#!/bin/bash
IPS=(192.168.100.12 192.168.100.13)
JOIN_CMD=`kubeadm token create --print-join-command 2> /dev/null`
for index in 0 1; do
ip=${IPS[${index}]}
ssh $ip "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
scp /etc/kubernetes/pki/ca.crt $ip:/etc/kubernetes/pki/ca.crt
scp /etc/kubernetes/pki/ca.key $ip:/etc/kubernetes/pki/ca.key
scp /etc/kubernetes/pki/sa.key $ip:/etc/kubernetes/pki/sa.key
scp /etc/kubernetes/pki/sa.pub $ip:/etc/kubernetes/pki/sa.pub
scp /etc/kubernetes/pki/front-proxy-ca.crt $ip:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.key $ip:/etc/kubernetes/pki/front-proxy-ca.key
scp /etc/kubernetes/pki/etcd/ca.crt $ip:/etc/kubernetes/pki/etcd/ca.crt
scp /etc/kubernetes/pki/etcd/ca.key $ip:/etc/kubernetes/pki/etcd/ca.key
scp /etc/kubernetes/admin.conf $ip:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf $ip:~/.kube/config
ssh ${ip} "${JOIN_CMD} --control-plane"
done
Script repository: https://gitee.com/hanfeng_edu/mastering_kubernetes.git
hostnamectl set-hostname k8s-01   # set on every machine, each with its own name
bash                              # re-exec the shell so the new hostname takes effect
cat >> /etc/hosts <<EOF
...omitted...
EOF
# Run the module scripts at boot
cat >> /etc/rc.sysinit <<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
    [ -x \$file ] && \$file
done
EOF
echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
Kernel tuning
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets that traverse a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1, packets forwarded by a layer-2 bridge are also run through the iptables FORWARD rules. The common options are:
net.bridge.bridge-nf-call-arptables: whether bridged ARP packets go through the arptables FORWARD chain
net.bridge.bridge-nf-call-ip6tables: whether bridged IPv6 packets go through the ip6tables chains
net.bridge.bridge-nf-call-iptables: whether bridged IPv4 packets go through the iptables chains
net.bridge.bridge-nf-filter-vlan-tagged: whether VLAN-tagged packets go through iptables/arptables
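To confirm the flags are active (a sketch; the keys only exist once br_netfilter is loaded):
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables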
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
containerd 1.6 bundles a runc that needs a newer libseccomp than the 2.3.1 shipped with CentOS 7, so upgrade it first:
[root@k8s-03 ~]# rpm -qa | grep libseccomp
libseccomp-2.3.1-4.el7.x86_64
[root@k8s-03 ~]# rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps
wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm
# Create the containerd config directory
mkdir /etc/containerd -p
# Install containerd
wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz
tar zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /   # extract straight into / so the files land in the matching directories
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/crictl
usr/local/bin/ctd-decoder
usr/local/bin/ctr
usr/local/bin/containerd-shim
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/critest
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
opt/
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/version
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/gce/env
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/cni/
opt/cni/bin/
opt/cni/bin/firewall
opt/cni/bin/portmap
opt/cni/bin/host-local
opt/cni/bin/ipvlan
opt/cni/bin/host-device
opt/cni/bin/sbr
opt/cni/bin/vrf
opt/cni/bin/static
opt/cni/bin/tuning
opt/cni/bin/bridge
opt/cni/bin/macvlan
opt/cni/bin/bandwidth
opt/cni/bin/vlan
opt/cni/bin/dhcp
opt/cni/bin/loopback
opt/cni/bin/ptp
After installation, add /usr/local/bin to PATH (see https://www.orchome.com/16586):
echo "export PATH=$PATH:/usr/local/bin" >> /etc/profile && . /etc/profile
containerd config default > /etc/containerd/config.toml
# Replace the default pause image address
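The actual replacement command is not shown above; a sketch, assuming the Aliyun google_containers mirror used elsewhere in this guide and containerd 1.6's default k8s.gcr.io pause image:
sed -i 's#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
systemctl daemon-reload && systemctl enable --now containerd   # load the unit installed by the tarball and start containerd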
# First, add host entries on top of the existing ones; this only needs to run on the master nodes
cat >>/etc/hosts<< EOF
192.168.100.18 k8s-master-01
192.168.100.19 k8s-master-02
192.168.100.20 k8s-master-03
192.168.100.199 apiserver.cc100.cn
EOF
# Build and install nginx from source
# Install the build dependencies
yum install pcre pcre-devel openssl openssl-devel gcc gcc-c++ automake autoconf libtool make wget vim lrzsz -y
wget https://nginx.org/download/nginx-1.20.2.tar.gz
tar xf nginx-1.20.2.tar.gz
cd nginx-1.20.2/
useradd nginx -s /sbin/nologin -M
./configure --prefix=/opt/nginx/ --with-pcre --with-http_ssl_module --with-http_stub_status_module --with-stream --with-http_gzip_static_module
make && make install
# Manage nginx with systemd and enable it at boot
cat >/usr/lib/systemd/system/nginx.service <<EOF
...omitted...
EOF
cat > /etc/keepalived/keepalived.conf <<EOF
...omitted...
EOF
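Both file bodies above are elided. A minimal sketch of what this step typically creates, assuming the three masters from the hosts block above (192.168.100.18-20), the 8443 entry port used by controlPlaneEndpoint later in this guide, and the --prefix=/opt/nginx build:
# nginx as a layer-4 (stream) proxy in front of the three kube-apiservers
cat >> /opt/nginx/conf/nginx.conf <<'EOF'
stream {
    upstream kube-apiserver {
        server 192.168.100.18:6443 max_fails=3 fail_timeout=30s;
        server 192.168.100.19:6443 max_fails=3 fail_timeout=30s;
        server 192.168.100.20:6443 max_fails=3 fail_timeout=30s;
    }
    server {
        listen 8443;
        proxy_pass kube-apiserver;
    }
}
EOF

# a minimal forking-type unit for the /opt/nginx build
cat > /usr/lib/systemd/system/nginx.service <<'EOF'
[Unit]
Description=nginx - kube-apiserver stream proxy
After=network.target

[Service]
Type=forking
ExecStart=/opt/nginx/sbin/nginx
ExecReload=/opt/nginx/sbin/nginx -s reload
ExecStop=/opt/nginx/sbin/nginx -s stop

[Install]
WantedBy=multi-user.target
EOF

# keepalived floats the VIP 192.168.100.199 (apiserver.cc100.cn) across the masters
cat > /etc/keepalived/keepalived.conf <<'EOF'
global_defs {
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER          # use BACKUP plus a lower priority on the other two masters
    interface eth0
    virtual_router_id 80
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.100.199
    }
}
EOF
systemctl daemon-reload && systemctl enable --now nginx keepalived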
First we need to configure the kubeadm repository on k8s-01.
The kubeadm steps below only need to run on k8s-01.
Replace the repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes
systemctl enable --now kubelet
Print the default configuration
kubeadm config print init-defaults > kubeadm-init.yaml
Customize it as follows
[root@k8s-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.18   # k8s-01 IP address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  extraArgs:
    etcd-servers: https://192.168.100.18:2379,https://192.168.100.19:2379,https://192.168.100.20:2379
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.23.5
controlPlaneEndpoint: apiserver.cc100.cn:8443   # HA endpoint; here I use the VIP domain
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # kube-proxy mode
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd   # cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
Validate the configuration with a dry run
kubeadm init --config kubeadm-init.yaml --dry-run
Pre-pull the images
kubeadm config images list --config kubeadm-init.yaml
wget https://d.frps.cn/file/kubernetes/image/k8s_all_1.23.5.tar
ctr -n k8s.io i import k8s_all_1.23.5.tar
# copy the image bundle to the other nodes and import it
for i in k8s-02 k8s-03 k8s-04 k8s-05;do
scp k8s_all_1.23.5.tar root@$i:/root/
ssh root@$i ctr -n k8s.io i import k8s_all_1.23.5.tar
done
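To spot-check the import on any node (a sketch):
ctr -n k8s.io images ls | head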
kubeadm does not install or manage kubelet or kubectl, so you must make sure their versions match the Kubernetes version kubeadm deploys; otherwise you risk version skew. One minor version of skew between the kubelet and the control plane is supported, but the kubelet version may never be newer than the API server version.
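A quick way to verify the versions line up before initializing (a sketch):
kubeadm version -o short             # e.g. v1.23.5
kubelet --version                    # Kubernetes v1.23.5
kubectl version --client --short     # v1.23.5 (after the download below)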
# download the v1.23.5 kubectl binary
[root@k8s-01 ~]# curl -LO https://dl.k8s.io/release/v1.23.5/bin/linux/amd64/kubectl
[root@k8s-01 ~]# chmod +x kubectl && mv kubectl /usr/local/bin/
# check the kubectl client version
[root@k8s-01 ~]# kubectl version --client --output=yaml
clientVersion:
buildDate: "2022-03-16T15:58:47Z"
compiler: gc
gitCommit: c285e781331a3785a7f436042c65c5641ce8a9e9
gitTreeState: clean
gitVersion: v1.23.5
goVersion: go1.17.8
major: "1"
minor: "23"
platform: linux/amd64
# copy kubectl to the other master nodes
for i in k8s-02 k8s-03;do
scp /usr/local/bin/kubectl root@$i:/usr/local/bin/kubectl
ssh root@$i chmod +x /usr/local/bin/kubectl
done
Initialize
kubeadm init --config kubeadm-init.yaml --upload-certs
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a \
--control-plane --certificate-key b67f07fdb20c39157f9a052902f48a83e1624dc4682ea9cd6db752bf31028a8d
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a
Save the token printed by init, then set up the kubeconfig for kubectl; its default path is ~/.kube/config
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
The initialization configuration is stored in a ConfigMap
kubectl -n kube-system get cm kubeadm-config -o yaml
Check the node status and version
[root@k8s-01 kubernetes]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-01 Ready control-plane,master 2m1s v1.23.5
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the components
yum install -y kubelet-1.23.5 kubeadm-1.23.5 kubectl-1.23.5 --disableexcludes=kubernetes
systemctl enable --now kubelet
Join the other control-plane nodes to the cluster
kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a \
--control-plane --certificate-key b67f07fdb20c39157f9a052902f48a83e1624dc4682ea9cd6db752bf31028a8d
After joining, the output ends with
...omitted...
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[root@k8s-03 ~]# mkdir -p $HOME/.kube
[root@k8s-03 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-03 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-03 ~]#
[root@k8s-03 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-01 Ready control-plane,master 7m18s v1.23.5
k8s-02 Ready control-plane,master 65s v1.23.5
k8s-03 Ready control-plane,master 15s v1.23.5
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubeadm-1.23.5 --disableexcludes=kubernetes
systemctl enable kubelet.service
kubeadm join apiserver.cc100.cn:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:97660eefdb5f8f4c809fe6d063e77b596aae307864045195db3609a83b54415a
The join command above is generated on the k8s-01 master with
kubeadm token create --print-join-command
Reference: https://i4t.com/5488.html