Node | IP address | Resources
k8s-master | 192.168.56.10 | 2 CPU cores, 4 GB RAM
k8s-node1 | 192.168.56.11 | 2 CPU cores, 4 GB RAM
k8s-node2 | 192.168.56.12 | 2 CPU cores, 4 GB RAM
Set hostnames
Master:
hostnamectl set-hostname k8s-master
Nodes:
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
Configure hosts resolution
cat <<EOF >> /etc/hosts
192.168.56.10 k8s-master
192.168.56.11 k8s-node1
192.168.56.12 k8s-node2
EOF
cat /etc/hosts
Install dependencies
yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Switch the firewall to iptables and flush the rules
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap and SELinux
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
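The sed expression above comments out every /etc/fstab line containing " swap ", so swap stays off after a reboot. It can be checked against a scratch copy before touching the real fstab (the device paths below are sample data):

```shell
# Build a sample fstab and apply the same sed expression used above;
# only the line containing " swap " should end up commented out.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample
grep '^#' /tmp/fstab.sample
```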
Tune kernel parameters
modprobe br_netfilter
cat <<EOF > kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # avoid using swap; only fall back to it when the system would otherwise OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # allow the OOM killer to run instead of panicking
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
If you get the error: sysctl: cannot stat /proc/sys/net/ipv4/tcp_tw_recycle: No such file or directory
then:
Check whether conntrack is loaded
lsmod | grep conntrack
Load conntrack
modprobe ip_conntrack
Re-run
sysctl -p /etc/sysctl.d/kubernetes.conf
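If the error persists even after loading conntrack, note that net.ipv4.tcp_tw_recycle was removed from the kernel in Linux 4.12, so on newer kernels the only fix is to delete that line. A sketch against a scratch copy (apply the same sed to /etc/sysctl.d/kubernetes.conf on the real host):

```shell
# tcp_tw_recycle no longer exists on kernels >= 4.12; deleting the obsolete
# line lets "sysctl -p" apply the remaining settings cleanly.
printf 'net.ipv4.ip_forward=1\nnet.ipv4.tcp_tw_recycle=0\n' > /tmp/kubernetes.conf
sed -i '/tcp_tw_recycle/d' /tmp/kubernetes.conf
cat /tmp/kubernetes.conf  # only net.ipv4.ip_forward=1 remains
```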
Adjust the system time zone
# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart crond
Configure rsyslogd and systemd journald
# Directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
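The body of this heredoc appears truncated in the original. A typical journald configuration used in similar setup guides looks like the following; the specific limits and retention values are common choices, not taken from this document, so adjust them to taste:

```shell
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk under /var/log/journal
Storage=persistent
# Compress large entries
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage and per-file size
SystemMaxUse=10G
SystemMaxFileSize=200M
# Keep logs for two weeks
MaxRetentionSec=2week
# Do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
```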
Prerequisites for enabling IPVS in kube-proxy
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# If the above produces an error like:
modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-372.9.1.el8.x86_64
# the kernel is too new (nf_conntrack_ipv4 was merged into nf_conntrack); try the following instead
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
Uninstall any existing Docker
Stop the service
systemctl stop docker
Remove files
rm -rf /etc/docker
rm -rf /run/docker
rm -rf /var/lib/dockershim
rm -rf /var/lib/docker
List installed Docker-related packages and remove them
yum list installed | grep docker
yum -y remove XXXXXXX
Check for Docker-related RPM packages and remove them
rpm -qa | grep docker
yum -y remove XXXXXXX
Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
List available versions
yum list docker-ce --showduplicates | sort -r
Install a specific version of the engine and CLI
yum install -y docker-ce-20.10.7-3.el7 docker-ce-cli-20.10.7-3.el7 containerd.io
Edit the configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://gqs7xcfd.mirror.aliyuncs.com","https://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
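A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth validating the JSON before reloading. A quick check, shown against a scratch copy so it can be run anywhere (point it at /etc/docker/daemon.json on the real host):

```shell
# python3 -m json.tool exits non-zero on invalid JSON, which catches
# trailing commas and missing quotes before they take Docker down.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```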
Start and enable the Docker service
systemctl daemon-reload && systemctl enable docker && systemctl start docker
If startup fails with the following error
failed to start daemon: error initializing graphdriver: overlay2: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support.
then remove the overlay2 storage-driver entry from daemon.json, or reformat the filesystem. [Before installing, it is recommended to check the mount point's filesystem with xfs_info; with ftype=0, kubeadm init will also fail.]
Reformatting the filesystem
Check filesystem information
df -T
xfs_info [mount device, e.g. /dev/sda1 or /dev/mapper/centos-root]
Result:
meta-data=/dev/mapper/centos-root isize=256 agcount=16, agsize=3276800 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=50759680, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=6400, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Reformat
As the output shows, ftype=0, which is why Docker failed to start.
Reformat the xfs filesystem with ftype set to 1 [the filesystem must be unmounted first]
mkfs.xfs -n ftype=1 /dev/mapper/centos-root
Install kubeadm, kubelet and kubectl
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
### List all versions; 1.23.6 is recommended
yum list kubelet --showduplicates
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet
Initialize the Kubernetes master/node
Run on the [master]:
kubeadm init \
--kubernetes-version 1.23.6 \
--apiserver-advertise-address=192.168.56.10 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers
Or as a single line:
kubeadm init --kubernetes-version 1.23.6 --apiserver-advertise-address=192.168.56.10 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
Configure cluster access
After installation completes, the terminal prints:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 192.168.56.10:6443 --token nd0hv2.mjlfe5rlu026nrtf \
--discovery-token-ca-cert-hash sha256:480eea593d48e4058204741b27fe936b0a5c7e2896fa3f7cb09c20ea9f8a5157
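The token printed by kubeadm init expires after 24 hours by default; if a node joins later, a fresh join command can be generated on the master (a standard kubeadm command, not specific to this guide):

```shell
# Prints a complete "kubeadm join ..." line with a new token and the current
# CA cert hash; run on the master, then paste the output on the joining node.
kubeadm token create --print-join-command
```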
Configure the network plugin (flannel)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Check service status
kubectl get pods -n kube-system -o wide
kubectl get nodes
Install the ingress-nginx plugin
kubectl apply -f ingress-controller.yaml
Test that ingress-nginx works by deploying a demo (the official test case)
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
# nginx.mercator.com is a custom domain name
kubectl create ingress demo-localhost --class=nginx --rule="nginx.mercator.com/*=demo:80"
kubectl get pod,svc,ingress
After a successful deployment, curl nginx.mercator.com should return "It works!"
For the request to succeed you need:
1. On the VM, configure name resolution: 127.0.0.1 nginx.mercator.com
2. On the local host machine, configure resolution: <host IP> nginx.mercator.com
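As an alternative to editing hosts files, curl can pin a hostname to an IP for a single request with --resolve. Demonstrated below against a throwaway local Python web server, with the guide's custom domain as the hostname (the port is a placeholder):

```shell
# Serve the current directory locally, then resolve the custom domain to
# 127.0.0.1 for this one request only -- no /etc/hosts changes needed.
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}' \
     --resolve nginx.mercator.com:8080:127.0.0.1 \
     http://nginx.mercator.com:8080/ > /tmp/resolve_status.txt
kill $SERVER_PID
cat /tmp/resolve_status.txt   # expected: 200 when the request succeeds
```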