Verified so far on CentOS 7.9 and Rocky 9.1.
IP | Hostname | OS | CPU (cores) | Memory (GB) | System Disk | Data Disk | Notes
---|---|---|---|---|---|---|---
192.168.2.37 | k8s-node01 | CentOS 7.9 | 2 | 8 | 100G | | 
192.168.2.38 | k8s-node02 | CentOS 7.9 | 2 | 8 | 100G | | 
192.168.2.39 | k8s-node03 | CentOS 7.9 | 2 | 8 | 100G | | 
192.168.2.40 | k8s-master | CentOS 7.9 | 2 | 2 | 100G | | 
k8s-node01 | k8s-node02 | k8s-node03 | k8s-master | Notes
---|---|---|---|---
rsync must be installed on every node; the sync script below depends on it:

```bash
yum install -y rsync
```
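Both helper scripts below also assume that every hostname resolves (e.g. via /etc/hosts) and that passwordless SSH works from the machine running them. A minimal setup sketch; the key path and host list are assumptions matching this cluster:

```bash
# Generate a key pair once (no passphrase), then push it to every node.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for host in k8s-master k8s-node01 k8s-node02 k8s-node03; do
  ssh-copy-id "root@$host"
done
```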
Save the following script as `xsync` somewhere on the `PATH`; it is invoked by that name later in this guide to distribute files to every node:

```bash
#!/bin/bash
# xsync: distribute files/directories to every node in the cluster via rsync.

# 1. Check the argument count
if [ $# -lt 1 ]; then
    echo "Not enough arguments!"
    exit 1
fi

# 2. Iterate over every machine in the cluster
for host in k8s-master k8s-node01 k8s-node02 k8s-node03; do
    echo "==================== $host ===================="
    # 3. Iterate over every path given on the command line
    for file in "$@"; do
        # 4. Only send paths that actually exist
        if [ -e "$file" ]; then
            # 5. Resolve the parent directory (following symlinks)
            pdir=$(cd -P "$(dirname "$file")" && pwd)
            # 6. Get the file's base name
            fname=$(basename "$file")
            ssh "$host" "mkdir -p $pdir"
            rsync -av "$pdir/$fname" "$host:$pdir"
        else
            echo "$file does not exist!"
        fi
    done
done
```
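A quick sanity check, assuming the script is saved as /usr/local/bin/xsync (the path is an assumption):

```bash
chmod +x /usr/local/bin/xsync
# Distribute /etc/hosts to every node listed in the script.
xsync /etc/hosts
```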
A companion script runs the same command on every node over SSH:

```bash
#!/bin/bash
# Run the command given on the command line on every node over SSH.

# Capture the command from the console
cmd=$*

# Refuse to run with an empty command
if [ -z "$cmd" ]; then
    echo "command can not be null!"
    exit 1
fi

# Current login user
user=$(whoami)

# Run the command on each node; adjust the host list to match your cluster.
# The names must match the hostnames configured above.
for host in k8s-master k8s-node01 k8s-node02 k8s-node03; do
    echo "================ current host is $host ================"
    echo "--> execute command \"$cmd\""
    ssh "$user@$host" "$cmd"
done
```
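Example usage, assuming it is saved as, say, /usr/local/bin/xcall (the name is illustrative; this guide never assigns one):

```bash
xcall "date"
```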
Component | Version
---|---
kubeadm, kubelet, kubectl | 1.27.0
containerd | 1.6.21
```bash
# Set the hostname (run on every node with its respective name)
[root@k8s-master ~]# hostnamectl set-hostname k8s-master && bash

# Disable SELinux (takes effect after reboot; setenforce 0 disables it for the current session)
[root@k8s-master ~]# sed -i 's/^ *SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@k8s-master ~]# setenforce 0

# Stop and disable the firewall
[root@k8s-master ~]# systemctl stop firewalld.service ; systemctl disable firewalld

# Switch to the Aliyun CentOS 7 yum mirror
[root@k8s-master ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
```
```bash
# Add the Kubernetes yum repository (Aliyun mirror)
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
```bash
[root@k8s-master ~]# yum clean all
[root@k8s-master ~]# yum makecache

# Set the timezone
[root@k8s-master ~]# timedatectl set-timezone Asia/Shanghai
# Install ntpdate
[root@k8s-master ~]# yum install ntpdate -y
# Sync the clock
[root@k8s-master ~]# ntpdate ntp1.aliyun.com
# A scheduled job can keep the clock in sync (see the sketch below)
```
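A minimal cron sketch for periodic syncing; the 30-minute interval and the NTP server are assumptions, adjust to taste:

```bash
# Append a sync job to root's crontab without clobbering existing entries.
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com") | crontab -
```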
```bash
# Disable swap: comment out the swap line in fstab, then turn it off for the current session
[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@k8s-master ~]# swapoff -a
```
```bash
# Load the kernel modules required by containerd and kube-proxy on boot
[root@k8s-master ~]# vim /etc/modules-load.d/k8s.conf
overlay
br_netfilter

# Load them immediately and verify
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# modprobe overlay
[root@k8s-master ~]# lsmod | grep br_netfilter
```
```bash
# Kernel network parameters for Kubernetes
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
```
The command below applies the kernel parameters from k8s.conf. The three settings do the following:

- net.bridge.bridge-nf-call-iptables = 1 passes bridged IPv4 traffic through the iptables chains.
- net.bridge.bridge-nf-call-ip6tables = 1 does the same for IPv6 traffic via ip6tables.
- net.ipv4.ip_forward = 1 enables IP forwarding.

```bash
# Apply just this file
[root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
# Or reload every sysctl configuration file so the settings take effect
[root@k8s-master ~]# sysctl --system
```
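To confirm the parameters actually took effect, they can be read back directly:

```bash
# Both should report 1.
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```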
IPVS (IP Virtual Server) is a load-balancing technology built into the Linux kernel. It distributes requests across a group of backend servers ("real servers") to improve performance and reliability. Within the LVS (Linux Virtual Server) framework, IPVS handles requests via NAT or direct-routing modes and supports TCP, UDP, and SCTP. It also monitors backend health: if a server fails, IPVS automatically removes it from the pool and forwards requests to the remaining healthy servers.

IPVS consists of three main components:

- Scheduler: forwards requests to backend servers according to a chosen algorithm (round-robin, weighted round-robin, least connections, and so on).
- IPVS control plane: configures and manages IPVS rules and policies.
- Real servers: handle the actual client requests.

IPVS is simple to operate, fast, and reliable, supports multiple balancing algorithms, and can be configured flexibly to match actual needs.
```bash
[root@k8s-master ~]# yum -y install ipset ipvsadm

# Kernel modules needed by kube-proxy's IPVS mode
[root@k8s-master ~]# vim /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack

# Make it executable, load the modules, and verify
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
```
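Once kube-proxy is running in IPVS mode (configured later in kubeadm.yaml), the active rules can be inspected. A sketch of the checks, to be run after the cluster is up:

```bash
# List IPVS virtual servers and their backends (e.g. the kubernetes service VIP).
ipvsadm -Ln
# kube-proxy reports its active mode on the metrics port.
curl 127.0.0.1:10249/proxyMode   # expected output: ipvs
```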
```bash
# Common base packages
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet

# Install containerd, start it, and enable it on boot
[root@k8s-master ~]# yum install containerd -y
[root@k8s-master ~]# systemctl start containerd && systemctl enable containerd
```
Important!

The default config.toml changed in some minor release (at least by 1.6.6), so the required edits differ between containerd versions. Applying the wrong variant can make `kubeadm init` fail, with the kubelet logging: `"Error getting node" err="node \"k8s-master\" not found"`.

```bash
[root@k8s-master ~]# mkdir -p /etc/containerd
[root@k8s-master ~]# containerd config default > /etc/containerd/config.toml
```
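Before editing, it helps to check which variant of the default config was generated; these are the fields that differ between containerd versions:

```bash
# Which pause/sandbox image and cgroup-driver setting does this config carry?
grep -n 'sandbox_image\|SystemdCgroup' /etc/containerd/config.toml
```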
```bash
# Older containerd: edit the generated config
# Replace k8s.gcr.io with the Aliyun mirror
[root@k8s-master ~]# sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
# Switch the runc cgroup driver to SystemdCgroup, matching the kubelet
[root@k8s-master ~]# sed -i '/containerd.runtimes.runc.options/a\ \ \ \ \ \ \ \ \ \ \ \ SystemdCgroup = true' /etc/containerd/config.toml
# Registry mirror address
[root@k8s-master ~]# sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g" /etc/containerd/config.toml
# Restart to apply the config
[root@k8s-master ~]# systemctl restart containerd
```
```bash
# Newer containerd
# Based on https://blog.csdn.net/zhh763984017/article/details/126714567 with an extra registry.k8s.io replacement
[root@k8s-master ~]# mkdir -p /etc/containerd && \
containerd config default > /etc/containerd/config.toml && \
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml && \
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml && \
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/k8sxio/pause:3.6#g' /etc/containerd/config.toml && \
sed -i '/registry.mirrors]/a\ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' /etc/containerd/config.toml && \
sed -i '/registry.mirrors."docker.io"]/a\ \ \ \ \ \ \ \ \ \ endpoint = ["http://hub-mirror.c.163.com"]' /etc/containerd/config.toml && \
sed -i '/hub-mirror.c.163.com"]/a\ \ \ \ \ \ \ \ [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]' /etc/containerd/config.toml && \
sed -i '/"k8s.gcr.io"]/a\ \ \ \ \ \ \ \ \ \ endpoint = ["http://registry.aliyuncs.com/google_containers"]' /etc/containerd/config.toml && \
echo "===========restart containerd to reload config===========" && \
systemctl restart containerd
```
If the restart fails, the likely cause is that `SystemdCgroup = true` was inserted twice; edit /etc/containerd/config.toml by hand to remove the duplicate.
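A quick way to spot the duplicate and confirm the file still parses (`containerd config dump` re-reads the on-disk config):

```bash
# More than one hit means SystemdCgroup was inserted twice.
grep -n 'SystemdCgroup' /etc/containerd/config.toml
# Fails loudly if config.toml is no longer valid TOML.
containerd config dump > /dev/null
```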
```bash
# List the installable Kubernetes versions
[root@k8s-master ~]# yum list kubelet --showduplicates | sort -r
# Install
[root@k8s-master ~]# yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0 --disableexcludes=kubernetes
# Enable on boot and start now
[root@k8s-master ~]# systemctl enable --now kubelet

# kubeadm: the tool that bootstraps a k8s cluster
# kubelet: installed on every node; responsible for starting Pods
# kubectl: deploys and manages applications; inspects, creates, deletes, and updates resources
```
```bash
# Check the version
[root@k8s-master ~]# kubeadm version
# Point crictl at containerd as the container runtime
[root@k8s-master ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
```
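To verify crictl can now talk to containerd (expect an empty container list on a fresh node):

```bash
crictl info | head
crictl ps
```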
```bash
# Generate the default init configuration
[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm.yaml
[root@k8s-master ~]# vim kubeadm.yaml
```
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.40 # master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master # control-plane node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # use the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16 # pod CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
--- # appended: run kube-proxy in IPVS mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
--- # appended: use the systemd cgroup driver, matching containerd
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```
```bash
# Pre-pull the apiserver and other control-plane images
[root@k8s-master ~]# kubeadm config images pull --config=kubeadm.yaml
# Initialize the cluster (the server needs at least 2 CPU cores, otherwise init fails)
[root@k8s-master ~]# kubeadm init --config kubeadm.yaml | tee kubeadm-init.log
```

On success, the log ends with a join command for the worker nodes:

```bash
kubeadm join 192.168.2.40:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:09bdee3b5da241080291a2aed41939a801dc5932a2caedf90b1fac8831450acb
```
**Set up the kubectl config file. This effectively authorizes kubectl, allowing it to use the admin certificate to manage the k8s cluster.**

```bash
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
```
Check the nodes; NotReady is expected here because no CNI network plugin is installed yet (handled below):

```bash
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   8m20s   v1.27.0
```
```bash
# Run on the worker nodes only
[root@k8s-node01 ~]# mkdir -p $HOME/.kube
# Then distribute the .kube/config file from the master with the xsync script
[root@k8s-master ~]# xsync ~/.kube/config
```
```bash
# This command is printed to the console by kubeadm init
# Run on the worker nodes only
[root@k8s-node01 ~]# kubeadm join 192.168.2.40:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:09bdee3b5da241080291a2aed41939a801dc5932a2caedf90b1fac8831450acb

# The join may fail as follows when both cri-dockerd and containerd are present as runtimes:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

# Point it at containerd explicitly (the same approach applies to other runtimes):
[root@k8s-node01 ~]# kubeadm join 192.168.2.40:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:09bdee3b5da241080291a2aed41939a801dc5932a2caedf90b1fac8831450acb \
	--cri-socket=unix:///var/run/containerd/containerd.sock
```
```bash
# If the join command was not recorded, regenerate and print it:
[root@k8s-master ~]# kubeadm token create --print-join-command
```

![image.png](https://img-blog.csdnimg.cn/img_convert/be716852f8c9ea318c464ab601b4ee0e.png)
```bash
# Show node labels
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME         STATUS     ROLES           AGE   VERSION   LABELS
k8s-master   NotReady   control-plane   53m   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node01   NotReady   <none>          32m   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux
k8s-node02   NotReady   <none>          38m   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux
k8s-node03   NotReady   <none>          38m   v1.27.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node03,kubernetes.io/os=linux
```
```bash
# Label the worker nodes with a role
[root@k8s-master ~]# kubectl label nodes k8s-node01 node-role.kubernetes.io/work=work
[root@k8s-master ~]# kubectl label nodes k8s-node02 node-role.kubernetes.io/work=work
[root@k8s-master ~]# kubectl label nodes k8s-node03 node-role.kubernetes.io/work=work
```
Several online install attempts failed, so calico was installed offline.

```bash
# Import the calico image archive into containerd's k8s.io namespace so the kubelet can see it
[root@k8s-master ~]# ctr -n k8s.io images import calico.tar.gz
# Apply the calico manifest
[root@k8s-master ~]# kubectl apply -f calico.yaml
```
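To watch the rollout, a sketch (the namespace depends on the calico manifest; the classic calico.yaml deploys into kube-system):

```bash
kubectl get pods -n kube-system -w
```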
```bash
# Option 1: download flannel, then apply it
[root@k8s-master ~]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

# Option 2: apply it directly (this URL is often unreachable from mainland China; look for a domestic mirror if so)
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# If the plugins above fail to install, Weave is an alternative; pick one of the two commands below
[root@k8s-master ~]# kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
[root@k8s-master ~]# kubectl apply -f http://static.corecore.cn/weave.v2.8.1.yaml
```
After a short wait, query again; every node's STATUS should be Ready.

![image.png](https://img-blog.csdnimg.cn/img_convert/36fa7854dec0c3c4916090528434ae46.png)
![image.png](https://img-blog.csdnimg.cn/img_convert/29abc78aa2e92489cdf08a30b772574b.png)
```bash
# Import the busybox image archive (again into the k8s.io namespace)
[root@k8s-master ~]# ctr -n k8s.io images import busybox-1-28.tar.gz
# Run a throwaway pod
[root@k8s-master ~]# kubectl run busybox --image=busybox:1.28 --restart=Never --rm -it -- sh
# Check that external DNS resolution works
/ # ping www.baidu.com
PING www.baidu.com (180.101.50.242): 56 data bytes
64 bytes from 180.101.50.242: seq=0 ttl=52 time=14.036 ms
# Check that coredns works
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
# Leave the pod
/ # exit
pod "busybox" deleted
```