The article as a whole is split into two parts: building the HA cluster from offline packages, and preparing those offline packages.
Demo environment
CPU | i7-8750H
---|---
RAM | 16 GB
Disk | 128 GB SSD + 1 TB HDD (VMware and all the VMs run on the HDD)
Virtualization tool | VMware Workstation Pro 15
Shell tool | SecureCRTPortable / SecureFXPortable
Host OS / build | Windows Home (Chinese edition) / 2004
VM plan (my hardware is limited; if yours has more resources, give the VMs more):
IP | Role | OS version | Spec
---|---|---|---
192.168.91.221 | Master | CentOS 7.6 | 1P/2C/2G/50G
192.168.91.132 | Master | CentOS 7.6 | 1P/2C/2G/50G
192.168.91.133 | Master | CentOS 7.6 | 1P/2C/2G/50G
192.168.91.163 | Node | CentOS 7.6 | 2P/2C/4G/70G
192.168.91.220 | VIP | - | -
Notes:
Spec notation: 1P/2C/2G/40G means 1 processor, 2 cores, 2 GB RAM, 40 GB disk.
Keep the CentOS version as close to mine as possible, and no older than CentOS 7.5.
For a highly available K8s cluster, no component may be a single point of failure.
kubeadm supports two ways of building a K8s HA cluster (stacked etcd vs. an external etcd cluster).
A comparison of the two topologies is in the official docs:
https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/ha-topology/
Topology diagram:
Many people assume an HA cluster must have at least three masters and always an odd number of them. That is not strictly true: two masters are already enough for HA, and an even number of masters also works.
https://www.kubernetes.org.cn/7569.html
https://cloud.tencent.com/developer/article/1551687
kube-apiserver: it only serves and forwards requests behind the load balancer, so it keeps working even when a single instance is left and can be scaled horizontally.
kube-controller-manager / kube-scheduler: within one cluster only a single instance of each of these two components is active at a time; when the active instance becomes unavailable, leader election makes the other instances try to take over.
Minimum number of healthy instances for these control-plane components: 1. As long as one instance is up, the service is available.
etcd requires leader election. It uses the Raft algorithm and follows the majority rule, so its minimum number of healthy members is floor(n/2)+1.
Total members | Minimum alive (quorum) | Failures tolerated
---|---|---
1 | 1 | 0
2 | 2 | 0
3 | 2 | 1
4 | 3 | 1
5 | 3 | 2
6 | 4 | 2
n | floor(n/2)+1 | n - (floor(n/2)+1)
With a single etcd member there is a single point of failure. With two members, losing either one makes the cluster unavailable, so an HA etcd cluster needs at least three members. A four-member cluster tolerates exactly as many failures as a three-member one, so from an availability standpoint four members buy nothing over three; each further step in availability only comes at the next odd size. Therefore an etcd cluster should have an odd number of members, and at least three.
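The table follows directly from the Raft majority rule; a tiny shell loop (just an illustration) reproduces it:
#Illustration: quorum and failure tolerance for n etcd members
for n in 1 2 3 4 5 6 7; do
    quorum=$(( n / 2 + 1 ))        #majority; integer division floors n/2
    tolerance=$(( n - quorum ))    #members that may fail while quorum survives
    echo "members=$n quorum=$quorum tolerance=$tolerance"
done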
HAProxy will be bound to the masters' kube-apiservers.
CentOS 7 ISO download: http://isoredirect.centos.org/centos/7/isos/x86_64/
Upload the offline packages to one of the servers, then copy them to the other servers over SSH with scp.
scp -r haproxy.cfg ipvs.conf k8s-master1.19.4/ kernel/ softrpm/ keepalived.conf 192.168.91.132:/root/
scp -r haproxy.cfg ipvs.conf k8s-master1.19.4/ kernel/ softrpm/ keepalived.conf 192.168.91.133:/root/
scp -r ipvs.conf kernel/ softrpm/ k8s-master1.19.4/ 192.168.91.163:/root/
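The two master copies above can also be done with a small loop (a sketch; adjust the file list and IPs to your own layout):
#Sketch: distribute the same offline package set to the remaining masters
for host in 192.168.91.132 192.168.91.133; do
    scp -r haproxy.cfg ipvs.conf k8s-master1.19.4/ kernel/ softrpm/ keepalived.conf $host:/root/
done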
Run on all servers:
rpm -ivh k8s-master1.19.4/* kernel/* softrpm/* --force
The NIC configuration of m1 is shown here as an example.
[root@m1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens32"
UUID="a6690222-c1e5-45cf-93e5-2cfa7bf54960"
DEVICE="ens32"
ONBOOT="yes"
IPADDR=192.168.91.221
GATEWAY=192.168.91.2
NETMASK=255.255.255.0
DNS1=114.114.114.114
[root@m1 ~]#
#Set the hostname on each machine accordingly
hostnamectl set-hostname m1
hostnamectl set-hostname m2
hostnamectl set-hostname m3
hostnamectl set-hostname node1
192.168.91.221 m1
192.168.91.132 m2
192.168.91.133 m3
192.168.91.163 node1
192.168.91.220 k8s.vip
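These name mappings are assumed to go into /etc/hosts on every node; a minimal sketch:
#Append the host entries on every node
cat <<EOF >> /etc/hosts
192.168.91.221 m1
192.168.91.132 m2
192.168.91.133 m3
192.168.91.163 node1
192.168.91.220 k8s.vip
EOF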
#!/bin/bash
#Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
#Disable the swap partition
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
#Disable DNS lookups in sshd
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
systemctl restart sshd
#Flush iptables and ipvs rules
iptables -P INPUT ACCEPT && iptables -P FORWARD ACCEPT && iptables -F && iptables -L -n && ipvsadm --clear
#Disable SELinux
setenforce 0 && sed -i 's/^ *SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
#Disable NetworkManager
systemctl stop NetworkManager && systemctl disable NetworkManager
#Write /etc/sysctl.conf
cat <<EOF > /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
net.ipv4.conf.all.forwarding=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
net.ipv4.neigh.default.gc_interval=60
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_intvl=30
net.ipv4.tcp_keepalive_probes=10
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
# See https://github.com/prometheus/node_exporter#disabled-by-default
kernel.perf_event_paranoid=-1
#sysctls for k8s node config
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.softlockup_all_cpu_backtrace=1
kernel.softlockup_panic=0
kernel.watchdog_thresh=30
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
fs.may_detach_mounts=1
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.ip_forward=1
net.ipv4.tcp_max_syn_backlog=8096
net.ipv4.tcp_rmem=4096 12582912 16777216
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1
kernel.yama.ptrace_scope=0
vm.swappiness=0
# Controls whether the pid is appended to core dump file names.
kernel.core_uses_pid=1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route=0
net.ipv4.conf.all.accept_source_route=0
# Promote secondary addresses when the primary address is removed
net.ipv4.conf.default.promote_secondaries=1
net.ipv4.conf.all.promote_secondaries=1
# Enable hard and soft link protection
fs.protected_hardlinks=1
fs.protected_symlinks=1
# Reverse-path (source route) validation
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets=5000
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_synack_retries=2
kernel.sysrq=1
vm.overcommit_memory=1
vm.panic_on_oom=0
vm.max_map_count=262144
EOF
#ipvs
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
#limits
cat <<EOF > /etc/security/limits.d/kubernetes.conf
* soft nproc 131072
* hard nproc 131072
* soft nofile 131072
* hard nofile 131072
root soft nproc 131072
root hard nproc 131072
root soft nofile 131072
root hard nofile 131072
EOF
#Load br_netfilter
modprobe br_netfilter && lsmod |grep br_netfilter
systemctl enable --now systemd-modules-load
sysctl -p
systemctl enable --now kubelet
#Kernel / GRUB settings
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
sed -i 's/GRUB_DEFAULT=saved/GRUB_DEFAULT=0/g' /etc/default/grub
sleep 2
reboot
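Assuming the script above is saved as envMake.sh (the name seen in the directory listings later), one way to run it on every node is a simple loop like the following; note that each node reboots at the end of the script:
#Sketch: push and run the environment script on each node
for host in 192.168.91.221 192.168.91.132 192.168.91.133 192.168.91.163; do
    scp envMake.sh $host:/root/ && ssh $host "bash /root/envMake.sh"
done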
[root@node1 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 155648 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 155648 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
[root@node1 ~]#
The modules are loaded; the setup succeeded.
Upload the Docker and Harbor packages to the servers. Docker has to be installed on every server; Harbor will be installed on node1.
Distribute Docker to every node:
Harbor goes to node1:
scp -r docker/ 192.168.91.132:/root/
scp -r docker/ 192.168.91.133:/root/
scp -r docker/ 192.168.91.163:/root/
scp -r harbor1.5/ 192.168.91.163:/root/
#Extract the tarball
tar -zxvf docker-19.03.9.tgz
#Copy all extracted binaries to /usr/bin
cp docker/* /usr/bin
#Copy docker.service to /etc/systemd/system/
cp docker.service /etc/systemd/system/
------------------------------------------------------------
[root@m1 docker]# ll docker
总用量 195504
-rwxr-xr-x 1 1000 1000 32751272 5月 15 2020 containerd
-rwxr-xr-x 1 1000 1000 6012928 5月 15 2020 containerd-shim
-rwxr-xr-x 1 1000 1000 18194536 5月 15 2020 ctr
-rwxr-xr-x 1 1000 1000 61113382 5月 15 2020 docker
-rwxr-xr-x 1 1000 1000 68874208 5月 15 2020 dockerd
-rwxr-xr-x 1 1000 1000 708616 5月 15 2020 docker-init
-rwxr-xr-x 1 1000 1000 2928514 5月 15 2020 docker-proxy
-rwxr-xr-x 1 1000 1000 9600696 5月 15 2020 runc
[root@m1 docker]# ll
总用量 59312
drwxrwxr-x 2 1000 1000 138 5月 15 2020 docker
-rw-r--r-- 1 root root 60730088 10月 25 21:13 docker-19.03.9.tgz
-rw-r--r-- 1 root root 1146 8月 27 2019 docker.service
[root@m1 docker]# cp docker/** /usr/bin/
[root@m1 docker]# cp docker.service /etc/systemd/system/
------------------------------------------------------------
Repeat the same steps on every node.
Start Docker:
------------------------------------------------------------
[root@m1 docker]# systemctl start docker && systemctl enable --now docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /etc/systemd/system/docker.service.
Configure the Docker daemon on every node (trust the Harbor registry on 192.168.91.163:88 and use the systemd cgroup driver):
cat <<EOF > /etc/docker/daemon.json
{
"insecure-registries": ["192.168.91.163:88"],
"registry-mirrors": [
"https://fz5yth0r.mirror.aliyuncs.com"
],
"max-concurrent-downloads": 15,
"max-concurrent-uploads": 15,
"oom-score-adjust": -1000,
"graph": "/var/lib/docker",
"exec-opts": ["native.cgroupdriver=systemd"],
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m",
"max-file": "3"
}
}
EOF
systemctl daemon-reload && systemctl restart docker
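After the restart it is worth confirming that Docker picked up the systemd cgroup driver and the insecure registry, for example:
#Verify the daemon settings took effect
docker info | grep -i -E "cgroup driver|insecure"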
[root@node1 ~]# ll
总用量 20
-rw-------. 1 root root 1253 11月 29 11:52 anaconda-ks.cfg
drwxr-xr-x 2 root root 54 11月 29 14:03 docker
-rwxr-xr-x. 1 root root 4053 11月 29 13:47 envMake.sh
drwxr-xr-x 4 root root 251 11月 29 14:05 harbor1.5
-rw-r--r--. 1 root root 108 11月 29 12:49 ipvs.conf
drwxr-xr-x. 2 root root 4096 11月 29 12:49 kernel
drwxr-xr-x. 2 root root 4096 11月 29 12:49 softrpm
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# cd harbor1.5/
[root@node1 harbor1.5]# ls
common docker-compose.yml install.sh
docker-compose ha LICENSE
docker-compose.clair.yml harbor.cfg NOTICE
docker-compose.notary.yml harbor.v1.5.0.tar.gz prepare
[root@node1 harbor1.5]#
[root@node1 harbor1.5]# vi harbor.cfg
## Configuration file of Harbor
#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version = 1.5.0
#The IP address or hostname to access admin UI and registry service.
#DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname = 192.168.91.163
Set hostname to the IP of this machine.
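Since the registry is later addressed as 192.168.91.163:88, Harbor's nginx proxy must be published on host port 88; that is also why the token realm in the registry template edited below carries the :88 suffix. A sketch of the two edits, assuming the stock '80:80' port mapping in docker-compose.yml:
#Sketch: set the Harbor hostname and publish the proxy on host port 88
sed -i 's/^hostname = .*/hostname = 192.168.91.163/' harbor.cfg
sed -i 's/- 80:80/- 88:80/' docker-compose.yml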
[root@node1 harbor1.5]# vi common/templates/registry/config.yml
version: 0.1
log:
  level: info
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  $storage_provider_info
  maintenance:
    uploadpurging:
      enabled: false
  delete:
    enabled: true
http:
  addr: :5000
  secret: placeholder
  debug:
    addr: localhost:5001
auth:
  token:
    issuer: harbor-token-issuer
    realm: $public_url:88/service/token
    rootcertbundle: /etc/registry/root.crt
    service: harbor-registry
notifications:
  endpoints:
    - name: harbor
      disabled: false
      url: $ui_url/service/notifications
      timeout: 3000ms
      threshold: 5
      backoff: 1s
[root@node1 harbor1.5]# chmod +x * && cp docker-compose /usr/bin/
[root@node1 harbor1.5]# ./install.sh
[Step 0]: checking installation environment ...
Note: docker version: 19.03.9
Note: docker-compose version: 1.25.0
[Step 1]: loading Harbor images ...
52ef9064d2e4: Loading layer [==================================================>] 135.9MB/135.9MB
c169f7c7a5ff: Loading layer [==================================================>]
#(image loading output omitted)
[Step 2]: preparing environment ...
Clearing the configuration file: ./common/config/adminserver/env
Clearing the configuration file: ./common/config/db/env
Clearing the configuration file: ./common/config/jobservice/config.yml
Clearing the configuration file: ./common/config/jobservice/env
Clearing the configuration file: ./common/config/log/logrotate.conf
Clearing the configuration file: ./common/config/nginx/nginx.conf
Clearing the configuration file: ./common/config/registry/config.yml
Clearing the configuration file: ./common/config/registry/root.crt
Clearing the configuration file: ./common/config/ui/app.conf
Clearing the configuration file: ./common/config/ui/env
Clearing the configuration file: ./common/config/ui/private_key.pem
Generated and saved secret to file: /data/secretkey
Generated configuration file: ./common/config/nginx/nginx.conf
Generated configuration file: ./common/config/adminserver/env
Generated configuration file: ./common/config/ui/env
Generated configuration file: ./common/config/registry/config.yml
Generated configuration file: ./common/config/db/env
Generated configuration file: ./common/config/jobservice/env
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/log/logrotate.conf
Generated configuration file: ./common/config/jobservice/config.yml
Generated configuration file: ./common/config/ui/app.conf
Generated certificate, key file: ./common/config/ui/private_key.pem, cert file: ./common/config/registry/root.crt
The configuration files are ready, please use docker-compose to start the service.
[Step 3]: checking existing instance of Harbor ...
[Step 4]: starting Harbor ...
Creating network "harbor15_harbor" with the default driver
Creating harbor-log ... done
Creating registry ... done
Creating redis ... done
Creating harbor-db ... done
Creating harbor-adminserver ... done
Creating harbor-ui ... done
Creating harbor-jobservice ... done
Creating nginx ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at http://192.168.91.163.
For more details, please visit https://github.com/vmware/harbor .
[root@m1 ~]# docker login 192.168.91.163:88
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@m1 ~]#
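To confirm that the registry actually accepts pushes (not just logins), a quick smoke test works; this assumes Harbor's default 'library' project exists and that some small image such as busybox is already present locally:
#Hypothetical smoke test: tag and push an existing local image
docker tag busybox:latest 192.168.91.163:88/library/busybox:test
docker push 192.168.91.163:88/library/busybox:test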
[root@m1 ~]# cd kubeadm/
[root@m1 kubeadm]# ls
calico.yaml haproxy.cfg k8simages keepalived.conf kubeadm.yaml
envMake.sh ipvs.conf k8s-master1.19.4 kernel softrpm
[root@m1 kubeadm]#
[root@m1 kubeadm]# ll
总用量 220
-rw-r--r--. 1 root root 187871 11月 26 17:41 calico.yaml
-rwxr-xr-x. 1 root root 3881 11月 29 13:37 envMake.sh
-rw-r--r--. 1 root root 1617 11月 26 17:24 haproxy.cfg
-rw-r--r--. 1 root root 108 11月 26 15:01 ipvs.conf
drwxr-xr-x. 2 root root 4096 11月 29 12:45 k8simages
drwxr-xr-x. 2 root root 4096 11月 29 12:45 k8s-master1.19.4
-rw-r--r--. 1 root root 667 11月 26 12:35 keepalived.conf
drwxr-xr-x. 2 root root 4096 11月 29 12:45 kernel
-rw-r--r--. 1 root root 1484 11月 26 17:28 kubeadm.yaml
drwxr-xr-x. 2 root root 4096 11月 29 12:45 softrpm
[root@m1 kubeadm]#
Script to load the offline images and push them all to Harbor:
#!/bin/bash
#Your Harbor address, including the port
harbor_address=192.168.91.163:88
#Directory holding the image tarballs
imagesDir=/root/kubeadm/k8simages
for image in `ls $imagesDir`;do
    docker load -i $imagesDir/$image
done
for i in `docker images | awk 'NR>1{print $1":"$2}'`;do
    imageTag=$harbor_address/$i
    docker tag $i $imageTag
    docker push $imageTag
done
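Running it is just `bash imagepush.sh` (the name that appears in the later directory listing); afterwards the re-tagged images can be listed to confirm everything went up under the Harbor prefix:
#List the images that were re-tagged with the Harbor address
docker images | grep 192.168.91.163:88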
[root@m1 ~]# cd kubeadm/
[root@m1 kubeadm]# vi keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32                 #your NIC name
    virtual_router_id 51
    priority 250
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    unicast_src_ip 192.168.91.221   #this machine's IP
    unicast_peer {                  #the other two masters' IPs
        192.168.91.132
        192.168.91.133
    }
    virtual_ipaddress {
        192.168.91.220              #the virtual IP (VIP)
    }
}
#Back up the original file
[root@m1 kubeadm]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak && mv keepalived.conf /etc/keepalived/
#Start keepalived
[root@m1 kubeadm]# systemctl restart keepalived.service && systemctl enable --now keepalived.service
#Check the virtual IP
[root@m1 kubeadm]# ip a | grep ens32
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
inet 192.168.91.221/24 brd 192.168.91.255 scope global noprefixroute ens32
inet 192.168.91.220/32 scope global ens32
[root@m2 ~]# vi keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP                    #backup node
    interface ens32                 #this machine's NIC name
    virtual_router_id 51
    priority 200                    #50 lower than the first master
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    unicast_src_ip 192.168.91.132   #this machine's IP
    unicast_peer {                  #the other two masters' IPs
        192.168.91.221
        192.168.91.133
    }
    virtual_ipaddress {
        192.168.91.220              #VIP
    }
}
[root@m2 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak && mv keepalived.conf /etc/keepalived/ && systemctl start keepalived && systemctl enable --now keepalived
[root@m3 ~]# vi keepalived.conf
global_defs {
   notification_email {
     root@localhost
   }
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP                    #backup node
    interface ens32                 #this machine's NIC name
    virtual_router_id 51
    priority 150                    #50 lower than the second master
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    unicast_src_ip 192.168.91.133   #this machine's IP
    unicast_peer {                  #the other masters' IPs
        192.168.91.221
        192.168.91.132
    }
    virtual_ipaddress {
        192.168.91.220              #VIP
    }
}
[root@m3 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak && mv keepalived.conf /etc/keepalived/ && systemctl start keepalived.service && systemctl enable --now keepalived.service
[root@m1 kubeadm]# vi haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    15s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 15s
frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats            #http://ip:8006/stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin
frontend kubernetes
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-apiServer
backend k8s-apiServer
    mode tcp
    option tcplog
    option httpchk GET /healthz
    http-check expect string ok
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    #Change the hostname and IP for each machine; the port is the apiserver port and stays 6443
    server m1 192.168.91.221:6443 check check-ssl verify none
    server m2 192.168.91.132:6443 check check-ssl verify none
    server m3 192.168.91.133:6443 check check-ssl verify none
[root@m1 kubeadm]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak && mv haproxy.cfg /etc/haproxy/ && systemctl restart haproxy.service && systemctl enable --now haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service
[root@m1 kubeadm]#
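With HAProxy running, the monitor frontend and the stats page defined above give a quick health check (the k8s-apiServer backend will stay down until the apiservers exist):
#Probe the monitor URI and the stats page from haproxy.cfg
curl -s http://127.0.0.1:33305/monitor
curl -s -u admin:admin http://127.0.0.1:8006/stats | head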
[root@m1 kubeadm]# vi kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
    cgroup-driver: "systemd"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
controlPlaneEndpoint: "k8s.vip:8443"
#Change the IP:port to your Harbor address
imageRepository: 192.168.91.163:88/k8s.gcr.io
kubernetesVersion: v1.19.4
networking:                        #cluster networking
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.68.0.0/16
apiServer:
  certSANs:
  - "192.168.91.221"
  - "192.168.91.133"
  - "192.168.91.132"
  - "m1"
  - "m2"
  - "m3"
  - "192.168.91.220"
  - "k8s.vip"
etcd:
  local:
    dataDir: "/var/lib/etcd"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "0.25"
  memory: 128Mi
imageGCHighThresholdPercent: 85    #image GC starts when disk usage exceeds this
imageGCLowThresholdPercent: 80     #image GC stops when disk usage drops below this
imageMinimumGCAge: 2m0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
failSwapOn: false
clusterDomain: cluster.local
rotateCertificates: true           #enable certificate rotation
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
bindAddressHardFail: false
mode: "ipvs"
#iptables:
#  masqueradeAll: false
#  masqueradeBit: null
#  minSyncPeriod: 0s
#  syncPeriod: 0s
ipvs:
  # If a node also runs LVS, exclude these CIDRs from kube-proxy management so its LVS rules are not flushed
  excludeCIDRs: [1.1.1.0/24,2.2.2.0/24]
  minSyncPeriod: 1s
  scheduler: "wrr"
  syncPeriod: 10s
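Before running init it can be useful to check that kubeadm will really pull everything from the Harbor mirror; kubeadm can print the resolved image names from this config:
#Show which images kubeadm will use with this configuration
kubeadm config images list --config kubeadm.yaml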
[root@m1 kubeadm]# kubeadm init --config=kubeadm.yaml
W1206 19:00:17.887611 8000 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.vip kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1 m2 m3] and IPs [10.96.0.1 192.168.91.221 192.168.91.133 192.168.91.132 192.168.91.220]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m1] and IPs [192.168.91.221 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m1] and IPs [192.168.91.221 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 40.040448 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: eiqtnz.nxpb6y5e4x6ltzfk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk \
--discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk \
--discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99
[root@m1 kubeadm]#
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@m1 kubeadm]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m1 NotReady master 81s v1.19.4
[root@m1 kubeadm]#
[root@m1 kubeadm]# ls
busytest.yaml calico.yaml imagepush.sh k8simages kernel softrpm
calicoimage.sh envMake.sh ipvs.conf k8s-master1.19.4 kubeadm.yaml
[root@m1 kubeadm]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@m1 kubeadm]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m1 Ready master 4m23s v1.19.4
[root@m1 kubeadm]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
[root@m1 kubeadm]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@m1 kubeadm]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@m1 kubeadm]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@m1 kubeadm]# systemctl restart kubelet
Run the control-plane join command printed by kubeadm on the other two masters.
[root@m2 ~]# kubeadm join k8s.vip:8443 --token 50hra4.a5du6uhvu3th535o --discovery-token-ca-cert-hash sha256:d5b613e37998fd3e239d9f2667e03913b7f351b05bac0179f20cded741d8b0d4 --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight:
One or more conditions for hosting a new control plane instance is not satisfied.
failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.
To see the stack trace of this error execute with --v=5 or higher
It fails: the CA certificate cannot be found, so copy the shared certificates over from the first master.
Before copying, create the default certificate directories on the new node:
[root@m2 kubernetes]# mkdir -p /etc/kubernetes/pki/etcd/
[root@m1 pki]# scp ca.* front-proxy-ca.* sa.* 192.168.91.132:/etc/kubernetes/pki/
root@192.168.91.132's password:
ca.crt 100% 1066 1.1MB/s 00:00
ca.key 100% 1675 1.5MB/s 00:00
front-proxy-ca.crt 100% 1078 1.1MB/s 00:00
front-proxy-ca.key 100% 1675 1.7MB/s 00:00
ca.crt 100% 1058 1.2MB/s 00:00
ca.key 100% 1679 1.8MB/s 00:00
sa.key 100% 1675 2.1MB/s 00:00
sa.pub 100% 451 571.2KB/s 00:00
[root@m1 pki]# scp etcd/ca.* 192.168.91.132:/etc/kubernetes/pki/etcd/
root@192.168.91.132's password:
ca.crt 100% 1058 1.0MB/s 00:00
ca.key 100% 1679 2.0MB/s 00:00
[root@m1 pki]#
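If more masters need the same files, a small loop (a sketch, run from /etc/kubernetes/pki on m1) avoids repeating the scp commands for each node:
#Sketch: copy the shared CA material to each extra control-plane node
for host in 192.168.91.132 192.168.91.133; do
    ssh $host "mkdir -p /etc/kubernetes/pki/etcd"
    scp ca.* front-proxy-ca.* sa.* $host:/etc/kubernetes/pki/
    scp etcd/ca.* $host:/etc/kubernetes/pki/etcd/
done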
[root@m2 ~]# kubeadm join k8s.vip:8443 --token 50hra4.a5du6uhvu3th535o --discovery-token-ca-cert-hash sha256:d5b613e37998fd3e239d9f2667e03913b7f351b05bac0179f20cded741d8b0d4 --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.vip kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m2] and IPs [10.96.0.1 192.168.91.132]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m2] and IPs [192.168.91.132 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m2] and IPs [192.168.91.132 127.0.0.1 ::1]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node m2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[root@m2 ~]# mkdir -p $HOME/.kube
[root@m2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@m2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m1 Ready master 60m v1.19.4
m2 Ready master 33s v1.19.4
[root@m2 ~]#
Apply the same --port=0 fix on the newly joined master as well:
[root@m2 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@m2 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@m3 ~]# mkdir -p /etc/kubernetes/pki/etcd/
[root@m1 pki]# scp ca.* front-proxy-ca.* sa.* 192.168.91.133:/etc/kubernetes/pki/
root@192.168.91.133's password:
ca.crt 100% 1066 1.2MB/s 00:00
ca.key 100% 1675 2.0MB/s 00:00
front-proxy-ca.crt 100% 1078 1.2MB/s 00:00
front-proxy-ca.key 100% 1675 1.9MB/s 00:00
sa.key 100% 1675 1.3MB/s 00:00
sa.pub 100% 451 460.2KB/s 00:00
[root@m1 pki]# scp etcd/ca.* 192.168.91.133:/etc/kubernetes/pki/etcd/
root@192.168.91.133's password:
ca.crt 100% 1058 995.1KB/s 00:00
ca.key 100% 1679 2.0MB/s 00:00
[root@m1 pki]#
[root@m3 ~]# kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk --discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99 --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s.vip kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local m1 m2 m3] and IPs [10.96.0.1 192.168.91.133 192.168.91.221 192.168.91.132 192.168.91.220]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost m3] and IPs [192.168.91.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost m3] and IPs [192.168.91.133 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node m3 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m3 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
[root@m3 ~]#
Apply the same --port=0 fix on m3:
[root@m3 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@m3 ~]# sed -i 's/- --port=0/#- --port=0/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@node1 ~]# kubeadm join k8s.vip:8443 --token eiqtnz.nxpb6y5e4x6ltzfk \
> --discovery-token-ca-cert-hash sha256:edc5b4b19bb03d4241d856b57958fd30402d56c9ea4dd6ddeaad9c0f471b2c99
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@node1 ~]#
[root@m1 pki]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m1 Ready master 31m v1.19.4
m2 Ready master 12m v1.19.4
m3 Ready master 9m1s v1.19.4
node1 Ready <none> 6m32s v1.19.4
[root@m1 pki]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-5dc87d545c-v6jlf 1/1 Running 0 29m
kube-system calico-node-fnv6p 1/1 Running 0 6m33s
kube-system calico-node-jjhwm 1/1 Running 0 29m
kube-system calico-node-lczjb 1/1 Running 0 9m2s
kube-system calico-node-pzg4v 1/1 Running 0 12m
kube-system coredns-5b46ccdfcd-6zzv7 1/1 Running 0 31m
kube-system coredns-5b46ccdfcd-xt8vw 1/1 Running 0 31m
kube-system etcd-m1 1/1 Running 0 31m
kube-system etcd-m2 1/1 Running 0 10m
kube-system etcd-m3 1/1 Running 0 8m58s
kube-system kube-apiserver-m1 1/1 Running 0 31m
kube-system kube-apiserver-m2 1/1 Running 0 12m
kube-system kube-apiserver-m3 1/1 Running 0 9m1s
kube-system kube-controller-manager-m1 1/1 Running 1 25m
kube-system kube-controller-manager-m2 1/1 Running 0 12m
kube-system kube-controller-manager-m3 1/1 Running 0 7m53s
kube-system kube-proxy-58xnn 1/1 Running 0 12m
kube-system kube-proxy-5fwkf 1/1 Running 0 31m
kube-system kube-proxy-5m6pw 1/1 Running 0 6m33s
kube-system kube-proxy-knwwx 1/1 Running 0 9m2s
kube-system kube-scheduler-m1 1/1 Running 1 25m
kube-system kube-scheduler-m2 1/1 Running 0 12m
kube-system kube-scheduler-m3 1/1 Running 0 7m58s
[root@m1 pki]#
If, after joining a new master, you delete that master again with kubectl and only a single control-plane node is left, the remaining master can stop working: etcd still has the removed node registered as a member, so etcd cannot form a cluster, its logs show it endlessly probing the deleted member, the container keeps restarting, and with etcd down the apiserver (and therefore kubectl) becomes unusable.
Fix: turn etcd back into a single-node cluster.
Add the following flags to the command section of etcd.yaml:
- --initial-cluster-state=new
- --force-new-cluster
[root@m1 kubeadm]# vi /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.91.221:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.91.221:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.91.221:2380
    - --initial-cluster=m1=https://192.168.91.221:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.91.221:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.91.221:2380
    - --name=m1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --initial-cluster-state=new
    - --force-new-cluster
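Once the kubelet restarts the etcd static Pod with these flags, the member list should show only m1 again; one way to check (assuming the etcd Pod is named etcd-m1):
#Verify that etcd is back to a single member
kubectl -n kube-system exec etcd-m1 -- etcdctl \
  --endpoints https://127.0.0.1:2379 \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key \
  member list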
Here I upload the Rancher image tarball to the root home directory on node1.
[root@node1 ~]# ll
总用量 131012
-rw-------. 1 root root 1241 11月 29 11:30 anaconda-ks.cfg
drwxr-xr-x 3 root root 68 11月 29 14:09 docker
drwxr-xr-x 4 root root 251 11月 29 13:58 harbor1.5
drwxr-xr-x. 6 root root 170 11月 29 19:09 kubeadm
-rw-r--r-- 1 root root 99381248 11月 29 20:38 rancherv2.4.8.tgz
[root@node1 ~]# docker load -i rancherv2.4.8.tgz
805802706667: Loading layer [==================================================>] 65.61MB/65.61MB
3fd9df553184: Loading layer [==================================================>] 15.87kB/15.87kB
7a694df0ad6c: Loading layer [==================================================>] 3.072kB/3.072kB
42f2a536483d: Loading layer [==================================================>] 138.6MB/138.6MB
cf2e851ac598: Loading layer [==================================================>] 6.656kB/6.656kB
969fe7f2b01e: Loading layer [==================================================>] 82.71MB/82.71MB
d04e1a085efb: Loading layer [==================================================>] 83.85MB/83.85MB
893359a85ac8: Loading layer [==================================================>] 35.8MB/35.8MB
1a95e9dfb001: Loading layer [==================================================>] 88.59MB/88.59MB
b600fec8ae23: Loading layer [==================================================>] 75.46MB/75.46MB
41e46c093409: Loading layer [==================================================>] 175.7MB/175.7MB
0c18e90730a5: Loading layer [==================================================>] 3.072kB/3.072kB
907bb86f0d6e: Loading layer [==================================================>] 82.05MB/82.05MB
46a7fe6f1101: Loading layer [==================================================>] 117.2MB/117.2MB
1d0e33d7ff7e: Loading layer [==================================================>] 3.072kB/3.072kB
82d416efaa0c: Loading layer [==================================================>] 5.12kB/5.12kB
ebe78adda61d: Loading layer [==================================================>] 44.05MB/44.05MB
7c1635c814db: Loading layer [==================================================>] 3.584kB/3.584kB
629e772b907b: Loading layer [==================================================>] 1.168MB/1.168MB
Loaded image: rancher/rancher:stable
[root@m1 ~]#
[root@node1 ~]# docker run -d --restart=unless-stopped \
-p 81:80 -p 9443:443 \
--privileged \
rancher/rancher:stable
dffcf59585bca2d4673edcde1e0dc480a1b931ed02a1c754ef08cbaf43bf003f
[root@m1 ~]#
[root@node1 k8s-master1.19.4]# docker ps | grep rancher
2d801cbb2651 rancher/rancher:stable "entrypoint.sh" 9 seconds ago Up 5 seconds 0.0.0.0:81->80/tcp, 0.0.0.0:9443->443/tcp loving_swirles
[root@node1 k8s-master1.19.4]#
[root@m3 ~]# curl --insecure -sfL https://192.168.91.163:9443/v3/import/lxj2rrm6pw97v2vzxcxnzm74m8zqpnnrklj2ths66psvhxknbmqnmf.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-f9d649b created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
[root@m3 ~]#
Ping the Pod IP from the different nodes.
On m1:
[root@m1 pki]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=63 time=0.453 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=63 time=0.677 ms
^C
--- 10.68.166.133 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.453/0.565/0.677/0.112 ms
[root@m1 pki]#
[root@m2 ~]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=63 time=0.375 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=63 time=0.268 ms
64 bytes from 10.68.166.133: icmp_seq=3 ttl=63 time=0.238 ms
^C
--- 10.68.166.133 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2088ms
rtt min/avg/max/mdev = 0.238/0.293/0.375/0.062 ms
[root@m2 ~]#
[root@m3 ~]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=63 time=0.329 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=63 time=0.267 ms
64 bytes from 10.68.166.133: icmp_seq=3 ttl=63 time=0.360 ms
^C
--- 10.68.166.133 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2071ms
rtt min/avg/max/mdev = 0.267/0.318/0.360/0.043 ms
[root@m3 ~]#
[root@node1 ~]# ping 10.68.166.133
PING 10.68.166.133 (10.68.166.133) 56(84) bytes of data.
64 bytes from 10.68.166.133: icmp_seq=1 ttl=64 time=0.054 ms
64 bytes from 10.68.166.133: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.68.166.133: icmp_seq=3 ttl=64 time=0.039 ms
64 bytes from 10.68.166.133: icmp_seq=4 ttl=64 time=0.034 ms
^C
--- 10.68.166.133 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3091ms
rtt min/avg/max/mdev = 0.034/0.041/0.054/0.009 ms
[root@node1 ~]#
When yum is used only to download the offline packages, it will not download a package that is already installed in the local environment.
ipvsadm, ipset, conntrack-tools
So haproxy, keepalived and psmisc are required; the killall command used by the keepalived check script comes from psmisc
telnet, tcpdump, net-tools, conntrack, socat
lsof, sysstat, htop
crontabs, chrony
bind-utils
jq (JSON processor), bash-completion (command completion), net-snmp-agent-libs, net-snmp-libs, wget, unzip, perl (kernel package dependency), libseccomp, tree
[root@localhost mnt]# mkdir softrpms
[root@localhost mnt]# cd ..
[root@localhost /]# tree mnt/
mnt/
└── softrpms
1 directory, 0 files
[root@localhost /]#
[root@localhost /]# yum install -y wget
#Back up the default yum repo file
[root@localhost /]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
#Install the Aliyun yum repo
[root@localhost /]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
[root@localhost /]# yum clean all
[root@localhost /]# yum makecache
[root@localhost /]# yum install --downloadonly --downloaddir=/mnt/softrpms/ conntrack-tools psmisc jq socat bash-completion ipset perl ipvsadm conntrack libseccomp net-tools crontabs sysstat unzip bind-utils tcpdump telnet lsof htop wget haproxy keepalived net-snmp-agent-libs net-snmp-libs chrony
[root@localhost /]# ls /mnt/softrpms/
bash-completion-2.1-8.el7.noarch.rpm perl-File-Temp-0.23.01-3.el7.noarch.rpm
bind-export-libs-9.11.4-26.P2.el7_9.2.x86_64.rpm perl-Filter-1.49-3.el7.x86_64.rpm
bind-libs-9.11.4-26.P2.el7_9.2.x86_64.rpm perl-Getopt-Long-2.40-3.el7.noarch.rpm
bind-libs-lite-9.11.4-26.P2.el7_9.2.x86_64.rpm perl-HTTP-Tiny-0.033-3.el7.noarch.rpm
bind-license-9.11.4-26.P2.el7_9.2.noarch.rpm perl-libs-5.16.3-297.el7.x86_64.rpm
bind-utils-9.11.4-26.P2.el7_9.2.x86_64.rpm perl-macros-5.16.3-297.el7.x86_64.rpm
chrony-3.4-1.el7.x86_64.rpm perl-parent-0.225-244.el7.noarch.rpm
dhclient-4.2.5-82.el7.centos.x86_64.rpm perl-PathTools-3.40-5.el7.x86_64.rpm
dhcp-common-4.2.5-82.el7.centos.x86_64.rpm perl-Pod-Escapes-1.04-297.el7.noarch.rpm
dhcp-libs-4.2.5-82.el7.centos.x86_64.rpm perl-podlators-2.5.1-3.el7.noarch.rpm
haproxy-1.5.18-9.el7.x86_64.rpm perl-Pod-Perldoc-3.20-4.el7.noarch.rpm
ipset-7.1-1.el7.x86_64.rpm perl-Pod-Simple-3.28-4.el7.noarch.rpm
ipset-libs-7.1-1.el7.x86_64.rpm perl-Pod-Usage-1.63-3.el7.noarch.rpm
ipvsadm-1.27-8.el7.x86_64.rpm perl-Scalar-List-Utils-1.27-248.el7.x86_64.rpm
keepalived-1.3.5-19.el7.x86_64.rpm perl-Socket-2.010-5.el7.x86_64.rpm
libpcap-1.5.3-12.el7.x86_64.rpm perl-Storable-2.45-3.el7.x86_64.rpm
libseccomp-2.3.1-4.el7.x86_64.rpm perl-Text-ParseWords-3.29-4.el7.noarch.rpm
lm_sensors-libs-3.4.0-8.20160601gitf9185e5.el7.x86_64.rpm perl-threads-1.87-4.el7.x86_64.rpm
lsof-4.87-6.el7.x86_64.rpm perl-threads-shared-1.43-6.el7.x86_64.rpm
net-snmp-agent-libs-5.7.2-49.el7.x86_64.rpm perl-Time-HiRes-1.9725-3.el7.x86_64.rpm
net-snmp-libs-5.7.2-49.el7.x86_64.rpm perl-Time-Local-1.2300-2.el7.noarch.rpm
net-tools-2.0-0.25.20131004git.el7.x86_64.rpm psmisc-22.20-17.el7.x86_64.rpm
perl-5.16.3-297.el7.x86_64.rpm sysstat-10.1.5-19.el7.x86_64.rpm
perl-Carp-1.26-244.el7.noarch.rpm tcpdump-4.9.2-4.el7_7.1.x86_64.rpm
perl-constant-1.27-2.el7.noarch.rpm telnet-0.17-66.el7.x86_64.rpm
perl-Encode-2.51-7.el7.x86_64.rpm unzip-6.0-21.el7.x86_64.rpm
perl-Exporter-5.68-3.el7.noarch.rpm wget-1.14-18.el7_6.1.x86_64.rpm
perl-File-Path-2.09-2.el7.noarch.rpm
[root@localhost /]#
[root@localhost mnt]# cd /mnt/ && tar -zcvf softrpms.tar.gz softrpms/
[root@localhost mnt]# ls
softrpms softrpms.tar.gz
[root@localhost mnt]#
[root@localhost mnt]# mkdir kernel
[root@localhost mnt]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@localhost mnt]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
获取http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
获取http://elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
准备中... ################################# [100%]
正在升级/安装...
1:elrepo-release-7.0-4.el7.elrepo ################################# [100%]
[root@localhost mnt]#
[root@localhost mnt]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
* elrepo-kernel: elrepo.0m3n.net
elrepo-kernel | 3.0 kB 00:00:00
elrepo-kernel/primary_db | 1.8 MB 00:00:01
可安装的软件包
elrepo-release.noarch 7.0-5.el7.elrepo elrepo-kernel
kernel-lt.x86_64 4.4.247-1.el7.elrepo elrepo-kernel
kernel-lt-devel.x86_64 4.4.247-1.el7.elrepo elrepo-kernel
kernel-lt-doc.noarch 4.4.247-1.el7.elrepo elrepo-kernel
kernel-lt-headers.x86_64 4.4.247-1.el7.elrepo elrepo-kernel
kernel-lt-tools.x86_64 4.4.247-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs.x86_64 4.4.247-1.el7.elrepo elrepo-kernel
kernel-lt-tools-libs-devel.x86_64 4.4.247-1.el7.elrepo elrepo-kernel
kernel-ml.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
kernel-ml-devel.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
kernel-ml-doc.noarch 5.9.12-1.el7.elrepo elrepo-kernel
kernel-ml-headers.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
kernel-ml-tools.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
kernel-ml-tools-libs-devel.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
perf.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
python-perf.x86_64 5.9.12-1.el7.elrepo elrepo-kernel
[root@localhost mnt]#
[root@localhost mnt]# yum install --downloadonly --downloaddir=/mnt/kernel/ --enablerepo="elrepo-kernel" kernel-ml kernel-ml-devel
[root@localhost mnt]# ls kernel/
kernel-ml-5.9.12-1.el7.elrepo.x86_64.rpm
kernel-ml-devel-5.9.12-1.el7.elrepo.x86_64.rpm
perl-5.16.3-297.el7.x86_64.rpm
perl-Carp-1.26-244.el7.noarch.rpm
perl-constant-1.27-2.el7.noarch.rpm
perl-Encode-2.51-7.el7.x86_64.rpm
perl-Exporter-5.68-3.el7.noarch.rpm
perl-File-Path-2.09-2.el7.noarch.rpm
perl-File-Temp-0.23.01-3.el7.noarch.rpm
perl-Filter-1.49-3.el7.x86_64.rpm
perl-Getopt-Long-2.40-3.el7.noarch.rpm
perl-HTTP-Tiny-0.033-3.el7.noarch.rpm
perl-libs-5.16.3-297.el7.x86_64.rpm
perl-macros-5.16.3-297.el7.x86_64.rpm
perl-parent-0.225-244.el7.noarch.rpm
perl-PathTools-3.40-5.el7.x86_64.rpm
perl-Pod-Escapes-1.04-297.el7.noarch.rpm
perl-podlators-2.5.1-3.el7.noarch.rpm
perl-Pod-Perldoc-3.20-4.el7.noarch.rpm
perl-Pod-Simple-3.28-4.el7.noarch.rpm
perl-Pod-Usage-1.63-3.el7.noarch.rpm
perl-Scalar-List-Utils-1.27-248.el7.x86_64.rpm
perl-Socket-2.010-5.el7.x86_64.rpm
perl-Storable-2.45-3.el7.x86_64.rpm
perl-Text-ParseWords-3.29-4.el7.noarch.rpm
perl-threads-1.87-4.el7.x86_64.rpm
perl-threads-shared-1.43-6.el7.x86_64.rpm
perl-Time-HiRes-1.9725-3.el7.x86_64.rpm
perl-Time-Local-1.2300-2.el7.noarch.rpm
[root@localhost mnt]# tar -zcvf kernel.tar.gz kernel/
[root@localhost mnt]# ls
kernel kernel.tar.gz softrpms softrpms.tar.gz
[root@localhost mnt]#
[root@localhost mnt]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
[root@localhost mnt]# grub2-set-default 0
#Make sure GRUB_DEFAULT=0
[root@localhost mnt]# vi /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
[root@localhost mnt]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@localhost mnt]# package-cleanup --oldkernels
[root@localhost ~]# mkdir /mnt/kubeadmSoft
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
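Optionally, verify that yum picks up the new repo on the machine used for downloading (this check is not part of the original steps; it only lists the configured repos):

[root@localhost ~]# yum repolist | grep -i kubernetes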
[root@localhost ~]# yum list kubelet kubeadm kubectl --disableexcludes=kubernetes |sort -r
Loaded plugins: fastestmirror
Installed Packages
* updates: mirror.lzu.edu.cn
Loading mirror speeds from cached hostfile
kubelet.x86_64 1.19.4-0 installed
kubectl.x86_64 1.19.4-0 installed
kubeadm.x86_64 1.19.4-0 installed
* extras: mirror.lzu.edu.cn
* elrepo: mirrors.tuna.tsinghua.edu.cn
* base: mirror.lzu.edu.cn
[root@localhost ~]#
[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --downloaddir=/mnt/kubeadmSoft/ --downloadonly
[root@localhost mnt]# ls kubeadmSoft/
14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm
318243df021e3a348c865a0a0b3b7d4802fe6c6c1f79500c6130d5c6b628766c-kubelet-1.19.4-0.x86_64.rpm
afa24df75879f7793f2b22940743e4d40674f3fcb5241355dd07d4c91e4866df-kubeadm-1.19.4-0.x86_64.rpm
c1fdeadba483d54bedecb0648eb7426cc3d7222499179911961f493a0e07fcd0-kubectl-1.19.4-0.x86_64.rpm
conntrack-tools-1.4.4-7.el7.x86_64.rpm
db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm
[root@localhost mnt]# tar -zcvf kubeadmSoft.tar.gz kubeadmSoft/
[root@localhost mnt]# ls
kubeadmSoft kubeadmSoft.tar.gz
[root@localhost mnt]#
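On the offline servers these RPMs are installed locally, just like the kernel packages (in the directory layout used earlier they may end up inside k8s-master1.19.4/). A minimal sketch, assuming the kubeadmSoft/ directory itself is copied over:

# install kubelet/kubeadm/kubectl and their dependencies from the local RPMs
rpm -ivh kubeadmSoft/*.rpm --force

# enable kubelet; it will keep restarting until kubeadm init/join has run
systemctl enable kubelet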
After choosing the Docker version you need, fetch a docker.service unit file from the Docker (moby) source repository on GitHub so that a binary-installed Docker can be managed by systemd:
https://github.com/moby/moby/tree/master/contrib/init/systemd
Note: use the docker.service.rpm file from that directory.
Alternatively, copy the content below and save it as docker.service.
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
#Restart=on-failure
StartLimitBurst=3
#StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
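To register the unit, copy it into systemd's unit directory, reload systemd, and start Docker. A minimal sketch, assuming the dockerd binary is already installed at /usr/bin/dockerd:

cp docker.service /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl start docker

# confirm the daemon is running and check the cgroup driver
docker info | grep -i cgroup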
keepalived.conf (shown for m1; adjust state, priority and the unicast addresses on m2/m3):
global_defs {
    notification_email {
        root@localhost
    }
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"     # exits non-zero when haproxy is not running
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER                    # use BACKUP on m2/m3
    interface ens32
    virtual_router_id 51
    priority 250                    # give m2/m3 lower priorities, e.g. 200/150
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    unicast_src_ip 192.168.91.221   # this node's own IP (m1); change on m2/m3
    unicast_peer {                  # the other two masters
        192.168.91.132
        192.168.91.133
    }
    virtual_ipaddress {
        192.168.91.220              # the VIP, mapped to k8s.vip in /etc/hosts
    }
}
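After placing keepalived.conf under /etc/keepalived/ on all three masters, start the service and check that the VIP landed on the node holding MASTER state. A sketch, using the addresses from the plan (VIP 192.168.91.220 on ens32):

systemctl enable keepalived
systemctl start keepalived

# the VIP should appear on ens32 of the current MASTER node
ip addr show ens32 | grep 192.168.91.220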
haproxy.cfg (identical on all three masters):
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 15s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 15s
timeout check 10s
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
listen stats
bind *:8006
mode http
stats enable
stats hide-version
stats uri /stats #http://ip:8006/stats
stats refresh 30s
stats realm Haproxy\ Statistics
stats auth admin:admin
frontend kubernetes
bind 0.0.0.0:8443
bind 127.0.0.1:8443
mode tcp
option tcplog
tcp-request inspect-delay 5s
default_backend k8s-apiServer
backend k8s-apiServer
mode tcp
option tcplog
option httpchk GET /healthz
http-check expect string ok
balance roundrobin
default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
server m1 192.168.91.221:6443 check check-ssl verify none
server m2 192.168.91.132:6443 check check-ssl verify none
server m3 192.168.91.133:6443 check check-ssl verify none
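Start HAProxy on all three masters and verify the frontends are listening; the /monitor and /stats endpoints below come from the configuration above. A minimal sketch:

systemctl enable haproxy
systemctl start haproxy

ss -lntp | grep -E '8443|8006|33305'    # the frontends should be listening
curl -s http://127.0.0.1:33305/monitor   # monitor-uri returns 200 OK while HAProxy is up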
[root@localhost ~]# wget https://docs.projectcalico.org/manifests/calico.yaml
If docs.projectcalico.org cannot be resolved, try configuring a DNS server on the NIC (e.g. the DNS1 entry in the ifcfg file).
[root@localhost mnt]# kubeadm config print init-defaults > kubeadm.conf
[root@localhost mnt]# kubeadm config print init-defaults --component-configs KubeProxyConfiguration > KubeProxyConfiguration.conf
[root@localhost mnt]# kubeadm config print init-defaults --component-configs KubeletConfiguration > KubeletConfiguration.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  kubeletExtraArgs:
    cgroup-driver: "systemd"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
controlPlaneEndpoint: "k8s.vip:8443"
imageRepository: 192.168.91.135:88/k8s.gcr.io
kubernetesVersion: v1.19.4
networking:                        # cluster networking
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.68.0.0/16
apiServer:
  certSANs:
  - "k8s.vip"
etcd:
  local:
    dataDir: "/var/lib/etcd"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "0.25"
  memory: 128Mi
imageGCHighThresholdPercent: 85    # image GC is triggered when disk usage exceeds this percentage
imageGCLowThresholdPercent: 80     # image GC stops once disk usage falls below this percentage
imageMinimumGCAge: 2m0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
failSwapOn: false
clusterDomain: cluster.local
rotateCertificates: true           # enable kubelet certificate rotation
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
bindAddressHardFail: false
mode: "ipvs"
#iptables:
#  masqueradeAll: false
#  masqueradeBit: null
#  minSyncPeriod: 0s
#  syncPeriod: 0s
ipvs:
  # If the node also provides LVS services directly, exclude these CIDRs from
  # kube-proxy so it does not flush the LVS rules
  excludeCIDRs: ["1.1.1.0/24", "2.2.2.0/24"]
  minSyncPeriod: 1s
  scheduler: "wrr"
  syncPeriod: 10s
kube-proxy configuration reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ or https://godoc.org/k8s.io/kube-proxy/config/v1alpha1#KubeProxyConfiguration
kubelet configuration reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ or https://godoc.org/k8s.io/kubelet/config/v1beta1#KubeletConfiguration
kubeadm config print join-defaults
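Once the configuration above is merged and reviewed, the first master would be initialized against the VIP endpoint roughly as follows. A sketch, assuming the merged YAML is saved as kubeadm.conf and the required images have already been loaded or are reachable from the imageRepository above:

# initialize the first control-plane node; --upload-certs lets the other
# masters pull the control-plane certificates when they join
kubeadm init --config kubeadm.conf --upload-certs

# then configure kubectl for the current user
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config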
#!/bin/bash
# Pull the kubeadm images for v1.19.4 from the Aliyun mirror, re-tag them back
# to k8s.gcr.io, and save each one as a gzipped tarball for offline import.
aliyunRegistry="registry.aliyuncs.com/google_containers"
kubernetesRegistry="k8s.gcr.io"
kubernetesImages=`kubeadm config images list --kubernetes-version=1.19.4`
imageDir="/mnt/k8simages"
mkdir -p $imageDir

# pull an image
function imagePull() {
    local imageTag=$1
    docker pull $imageTag
}

# re-tag the mirror image back to k8s.gcr.io, then save it
function imageT(){
    local imageTag=$1
    local imageName=$2
    docker tag $imageTag $kubernetesRegistry/$imageName
    imageSave $kubernetesRegistry/$imageName $imageName
}

# save an image as <name:tag>.tgz under $imageDir
function imageSave(){
    local imageTag=$1
    local imageName=$2
    docker save $imageTag | gzip > $imageDir/$imageName.tgz
}

# iterate over the image list line by line
IFS=$'\n'
for image in $kubernetesImages ; do
    imageName=${image#*/}             # strip the k8s.gcr.io/ prefix, keep name:tag
    tag=$aliyunRegistry/$imageName    # corresponding image on the Aliyun mirror
    imagePull $tag
    imageT $tag $imageName
done
[root@localhost mnt]# ls k8simages/
coredns:1.7.0.tgz kube-apiserver:v1.19.4.tgz kube-proxy:v1.19.4.tgz pause:3.2.tgz
etcd:3.4.13-0.tgz kube-controller-manager:v1.19.4.tgz kube-scheduler:v1.19.4.tgz
[root@localhost mnt]#
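To use these archives offline, they can either be loaded into Docker on every node, or re-tagged and pushed to the private registry referenced by imageRepository (192.168.91.135:88 here). A minimal sketch of the load path, assuming the k8simages directory has been copied to the node:

for f in k8simages/*.tgz ; do
    docker load -i $f     # docker load handles the gzip-compressed archive directly
done
docker images | grep k8s.gcr.io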
#!/bin/bash
# Pull the images referenced by calico.yaml and save them as gzipped tarballs
# for offline import.
dir=/mnt/calico
mkdir -p $dir
for i in `cat calico.yaml | grep image | awk '{print $2}'` ; do
    imageName=${i##*/}                # e.g. calico/cni:v3.17.0 -> cni:v3.17.0
    docker pull $i
    docker save $i | gzip > $dir/$imageName.tgz
done
[root@localhost mnt]# ls calico
cni:v3.17.0.tgz kube-controllers:v3.17.0.tgz node:v3.17.0.tgz pod2daemon-flexvol:v3.17.0.tgz
[root@localhost mnt]#
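As with the kubeadm images, these archives would be loaded (or pushed to the private registry) on the target nodes, and the manifest applied once the control plane is up. A sketch:

for f in calico/*.tgz ; do
    docker load -i $f
done

# apply the network plugin after kubeadm init has succeeded
kubectl apply -f calico.yaml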