Host IP | Hostname | Role | Spec | Software installed |
---|---|---|---|---|
192.168.3.254 | ha | LB | 2C-1G 10G | nginx (single node); an HA setup with keepalived and a virtual VIP is straightforward and not covered here |
192.168.3.51 | k8s-master1 | master | 4C-2G 50G | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
192.168.3.52 | k8s-master2 | master | 4C-2G 50G | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
192.168.3.53 | k8s-master3 | master | 4C-2G 50G | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
192.168.3.54 | k8s-work1 | work | 8C-8G 100G | kubelet, kube-proxy, containerd, runc |
192.168.3.55 | k8s-work2 | work | 8C-8G 100G | kubelet, kube-proxy, containerd, runc |
Strictly speaking, etcd is not part of k8s: it provides a superset of the functionality k8s needs, and is usually deployed on 3 separate machines with SSD disks.
Software | Version | Notes |
---|---|---|
CentOS 7.9 | kernel 6.0.12 | k8s supports newer kernels better than older ones |
Kubernetes | v1.23.14 | a relatively stable release at the time of writing |
etcd | latest, v3.5.6 | v3.5.5 fixed a data-corruption issue |
calico | latest, v3.24.5 | network plugin |
coredns | latest, v1.10.0 | in-cluster DNS resolver |
containerd | latest, v1.6.12 | manages the container lifecycle |
runc | latest, v1.1.4 | the runc bundled with containerd has known issues |
nginx | v1.22.1 | stock version from the yum repo |
Network | CIDR | Notes |
---|---|---|
node network | 192.168.3.0/24 | work node network |
service network | 10.96.0.0/16 | allocated via the apiserver's --service-cluster-ip-range |
pod network | 10.244.0.0/16 | allocated by the network plugin |
***** Globally replace the IPs and CIDRs above according to your actual environment *****
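For example, once the config files below live in the working directory (/data/k8s, created later), a single sed pass can rewrite the node prefix. A minimal sketch; 10.0.0. is a placeholder, adjust both patterns to your environment:
# Rewrite the example node prefix 192.168.3. to your real one in every file
grep -rl '192\.168\.3\.' /data/k8s | xargs sed -i 's/192\.168\.3\./10.0.0./g'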
Purpose: convenience for subsequent operations.
hostnamectl set-hostname xxx
Purpose: convenience for subsequent operations.
cat >> /etc/hosts << EOF
192.168.3.254 ha
192.168.3.51 k8s-master1
192.168.3.52 k8s-master2
192.168.3.53 k8s-master3
192.168.3.54 k8s-work1
192.168.3.55 k8s-work2
EOF
Purpose: avoid the nftables backend compatibility problem, which produces duplicate firewall rules.
The iptables tooling can act as a compatibility layer, behaving like iptables but actually configuring nftables. This nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld   # verify it is stopped
Purpose: allow containers to access the host filesystem.
Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
setenforce 0
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sestatus   # verify it is disabled
Purpose: k8s may not work properly with swap enabled.
A PR for swap support reportedly exists, but its upstream status is unclear: https://github.com/kubernetes/kubernetes/issues/53533
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p
Purpose: keep node clocks in sync so k8s works reliably.
timedatectl set-local-rtc 1
timedatectl set-timezone Asia/Shanghai
yum -y install ntpdate
crontab -e
0 */1 * * * ntpdate time1.aliyun.com
View the scheduled job: crontab -l
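A non-interactive alternative to crontab -e, in case you script the node setup (a sketch; it replaces any existing ntpdate entry):
# Append the hourly ntpdate job without opening an editor
(crontab -l 2>/dev/null | grep -v ntpdate; echo '0 */1 * * * ntpdate time1.aliyun.com') | crontab -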
Purpose: raise system resource limits so k8s works reliably.
cat <<EOF >> /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
Purpose: let kube-proxy run in the higher-performance IPVS mode.
Install on all k8s cluster nodes (the ha node can skip this). Kernels 4.18 and below use nf_conntrack_ipv4; kernels 4.19+ use nf_conntrack (see the sketch after the module list below).
yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
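The module list above hard-codes nf_conntrack, which is correct for kernels 4.19+ (including the 6.0.12 kernel installed below). If you stay on an older kernel, swap the module name; a minimal sketch:
# On kernels < 4.19 the conntrack module is nf_conntrack_ipv4 instead of nf_conntrack
kmaj=$(uname -r | cut -d. -f1); kmin=$(uname -r | cut -d. -f2)
if [ "$kmaj" -lt 4 ] || { [ "$kmaj" -eq 4 ] && [ "$kmin" -lt 19 ]; }; then
    sed -i 's/^nf_conntrack$/nf_conntrack_ipv4/' /etc/modules-load.d/ipvs.conf
fi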
Purpose: load the kernel modules containerd needs to manage the container lifecycle.
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
Enable at boot:
systemctl enable --now systemd-modules-load.service
Purpose: upgrade to a newer mainline kernel, which k8s supports better.
yum -y install perl
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
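Before rebooting, you can optionally confirm that menu entry 0 is the new kernel (output format varies slightly by grub version):
# List grub menu entries with their indexes; entry 0 should be the kernel-ml build
awk -F\' '/^menuentry/{print i++ " : " $2}' /boot/grub2/grub.cfg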
Purpose: set the kernel parameters k8s depends on.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=15
net.ipv4.tcp_max_tw_buckets=36000
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_max_orphans=327680
net.ipv4.tcp_orphan_retries=3
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_max_syn_backlog=16384
net.ipv4.tcp_timestamps=0
net.core.somaxconn=16384
EOF
Apply the settings: sysctl --system
Purpose: allow password-less file copying between machines in later steps.
Generate the key on k8s-master1, since the vast majority of later operations run there: ssh-keygen with an empty passphrase.
ssh-keygen
ssh-copy-id root@k8s-master1
ssh-copy-id root@k8s-master2
ssh-copy-id root@k8s-master3
ssh-copy-id root@k8s-work1
ssh-copy-id root@k8s-work2
After completing all of the configuration above, reboot the machines so every setting takes effect: reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
lsmod | egrep 'br_netfilter|overlay'
yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git lrzsz
A two-node keepalived + VIP setup is straightforward and not covered here.
cat > /etc/yum.repos.d/nginx.repo <<"EOF"
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF
yum -y install nginx
systemctl enable nginx
systemctl start nginx
cat >> /etc/nginx/nginx.conf <<"EOF"
stream {
    log_format proxy '[$time_local] $remote_addr '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
    access_log /var/log/nginx/tcp_access.log proxy;
    error_log /var/log/nginx/tcp_error.log;
    upstream HA {
        hash $remote_addr consistent;
        server 192.168.3.51:6443 weight=5 max_fails=1 fail_timeout=3s;
        server 192.168.3.52:6443 weight=5 max_fails=1 fail_timeout=3s;
        server 192.168.3.53:6443 weight=5 max_fails=1 fail_timeout=3s;
    }
    server {
        listen 6443;
        proxy_connect_timeout 3s;
        proxy_timeout 30s;
        proxy_pass HA;
    }
}
EOF
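It may help to validate and reload the configuration right away. The apiserver backends are not up yet, so at this stage only the listener can be checked:
nginx -t && systemctl reload nginx   # validate the stream block, then reload
ss -lntp | grep ':6443'              # nginx should now be listening on 6443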
Run on k8s-master1:
mkdir -p /data/k8s && cd /data/k8s
The installation involves a large amount of certificate work; use cfssl to generate the certificates (cfssl, cfssljson, cfssl-certinfo).
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64
chmod +x cfssl*
mv cfssl_1.6.3_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.6.3_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.6.3_linux_amd64 /usr/local/bin/cfssl-certinfo
cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "shanjie",
      "OU": "pingtaibu"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
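To sanity-check the new CA (subject, validity window), cfssl-certinfo can decode it:
# Decode the CA certificate and show the leading fields
cfssl-certinfo -cert ca.pem | head -n 20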
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
server auth means a client can use this CA to verify certificates presented by servers
client auth means a server can use this CA to verify certificates presented by clients
Fetch the latest etcd binaries
wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
tar xvf etcd-v3.5.6-linux-amd64.tar.gz
chmod +x etcd-v3.5.6-linux-amd64/etcd*
k8s only uses etcd to store cluster metadata; strictly speaking etcd is not part of k8s, and it provides a superset of the functionality k8s needs.
Because machines are limited here it is co-located with the k8s-master nodes; etcd is normally deployed on 3 separate machines with SSD disks.
cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.3.51",
    "192.168.3.52",
    "192.168.3.53"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "shanjie",
      "OU": "pingtaibu"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
cat > etcd.conf << EOF
name: etcd1
data-dir: /var/lib/etcd
listen-client-urls: https://192.168.3.51:2379,http://127.0.0.1:2379
listen-peer-urls: https://192.168.3.51:2380
advertise-client-urls: https://192.168.3.51:2379
initial-advertise-peer-urls: https://192.168.3.51:2380
initial-cluster: etcd1=https://192.168.3.51:2380,etcd2=https://192.168.3.52:2380,etcd3=https://192.168.3.53:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new
client-transport-security:
  cert-file: /etc/etcd/ssl/etcd.pem
  key-file: /etc/etcd/ssl/etcd-key.pem
  trusted-ca-file: /etc/etcd/ssl/ca.pem
  client-cert-auth: true
peer-transport-security:
  cert-file: /etc/etcd/ssl/etcd.pem
  key-file: /etc/etcd/ssl/etcd-key.pem
  trusted-ca-file: /etc/etcd/ssl/ca.pem
  client-cert-auth: true
EOF
cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.conf
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
All 3 master nodes need: mkdir -p /etc/etcd/ssl
Note: on each master node, adjust name and the listen/advertise urls in etcd.conf
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master1:/usr/local/bin/
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master2:/usr/local/bin/
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master3:/usr/local/bin/
scp etcd.pem etcd-key.pem ca.pem k8s-master1:/etc/etcd/ssl/
scp etcd.pem etcd-key.pem ca.pem k8s-master2:/etc/etcd/ssl/
scp etcd.pem etcd-key.pem ca.pem k8s-master3:/etc/etcd/ssl/
scp etcd.conf k8s-master1:/etc/etcd/
scp etcd.conf k8s-master2:/etc/etcd/
scp etcd.conf k8s-master3:/etc/etcd/
scp etcd.service k8s-master1:/usr/lib/systemd/system/
scp etcd.service k8s-master2:/usr/lib/systemd/system/
scp etcd.service k8s-master3:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
etcdctl --endpoints="https://192.168.3.51:2379,https://192.168.3.52:2379,https://192.168.3.53:2379" --cacert=/etc/etcd/ssl/ca.pem --key=/etc/etcd/ssl/etcd-key.pem --cert=/etc/etcd/ssl/etcd.pem endpoint status --write-out=table
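Beyond endpoint status, a couple of extra checks confirm cluster health (same TLS flags as above):
etcdctl --endpoints="https://192.168.3.51:2379,https://192.168.3.52:2379,https://192.168.3.53:2379" --cacert=/etc/etcd/ssl/ca.pem --key=/etc/etcd/ssl/etcd-key.pem --cert=/etc/etcd/ssl/etcd.pem endpoint health
etcdctl --endpoints="https://192.168.3.51:2379" --cacert=/etc/etcd/ssl/ca.pem --key=/etc/etcd/ssl/etcd-key.pem --cert=/etc/etcd/ssl/etcd.pem member list --write-out=table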
The following steps target the k8s-master nodes
wget https://dl.k8s.io/v1.23.14/kubernetes-server-linux-amd64.tar.gz
tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master1:/usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/
scp kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/
scp kubelet kube-proxy k8s-work1:/usr/local/bin/
scp kubelet kube-proxy k8s-work2:/usr/local/bin/
Return to the working directory: cd /data/k8s/
192.168.3.56~70 below are reserved IPs for future nodes; reserve according to your actual needs
cat > kube-apiserver-csr.json << EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.3.254",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "192.168.3.51",
    "192.168.3.52",
    "192.168.3.53",
    "192.168.3.54",
    "192.168.3.55",
    "192.168.3.56",
    "192.168.3.57",
    "192.168.3.58",
    "192.168.3.59",
    "192.168.3.60",
    "192.168.3.61",
    "192.168.3.62",
    "192.168.3.63",
    "192.168.3.64",
    "192.168.3.65",
    "192.168.3.66",
    "192.168.3.67",
    "192.168.3.68",
    "192.168.3.69",
    "192.168.3.70"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "shanjie",
      "OU": "pingtaibu"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
echo "`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > bootstrap-token.csv
The bootstrap token is used by work nodes to request certificates from the apiserver; kubelet certificates are signed dynamically by the apiserver.
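You can eyeball the generated file; each line has the shape token,user,uid,group (your 32-hex-character token will differ):
cat bootstrap-token.csv   # expect: <32-hex-token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"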
cat > kube-apiserver.conf << EOF
KUBE_API_ARGS="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--anonymous-auth=false \
--bind-address=192.168.3.51 \
--secure-port=6443 \
--advertise-address=192.168.3.51 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/bootstrap-token.csv \
--service-node-port-range=10000-60000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.3.51:2379,https://192.168.3.52:2379,https://192.168.3.53:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--logtostderr=false \
--alsologtostderr=true \
--v=4 \
--log-dir=/var/log/kubernetes"
EOF
cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_API_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
All 3 master nodes need to create:
mkdir -p /etc/kubernetes/ssl
mkdir -p /var/log/kubernetes
Note: on each master node, adjust the bind/advertise IPs in kube-apiserver.conf
scp ca*.pem kube-apiserver*.pem k8s-master1:/etc/kubernetes/ssl/
scp ca*.pem kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl/
scp ca*.pem kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl/
scp bootstrap-token.csv kube-apiserver.conf k8s-master1:/etc/kubernetes/
scp bootstrap-token.csv kube-apiserver.conf k8s-master2:/etc/kubernetes/
scp bootstrap-token.csv kube-apiserver.conf k8s-master3:/etc/kubernetes/
scp kube-apiserver.service k8s-master1:/usr/lib/systemd/system/
scp kube-apiserver.service k8s-master2:/usr/lib/systemd/system/
scp kube-apiserver.service k8s-master3:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
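Each apiserver instance can now be probed directly. Because --anonymous-auth=false is set, an HTTP 401 is the expected answer and still proves TLS is up and the process is serving (adjust the IP per node):
curl -k https://192.168.3.51:6443/healthz   # a 401 Unauthorized response means the apiserver is listening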
The admin certificate is used to generate the administrator kubeconfig; "O" must be "system:masters".
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
kube.config is kubectl's configuration file; it contains everything needed to reach the apiserver (the apiserver address, the CA certificate, and its own client certificate).
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube.config
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
kubectl config use-context kubernetes --kubeconfig=kube.config
All 3 master nodes need: mkdir /root/.kube
scp kube.config k8s-master1:/root/.kube/config
scp kube.config k8s-master2:/root/.kube/config
scp kube.config k8s-master3:/root/.kube/config
kubectl create clusterrolebinding kube-apiserver:kubectl-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config
echo "export KUBECONFIG=/root/.kube/config" >> /etc/profile
source /etc/profile
Verify that kubectl works:
kubectl cluster-info
kubectl get componentstatuses
hosts contains the IPs of all kube-controller-manager nodes
cat > kube-controller-manager-csr.json << EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1",
    "192.168.3.51",
    "192.168.3.52",
    "192.168.3.53"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube-controller-manager.config
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.config
kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.config
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.config
cat > kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_ARGS="--secure-port=10257 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.config \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--cluster-signing-duration=876000h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
scp kube-controller-manager*.pem k8s-master1:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/
scp kube-controller-manager.config kube-controller-manager.conf k8s-master1:/etc/kubernetes/
scp kube-controller-manager.config kube-controller-manager.conf k8s-master2:/etc/kubernetes/
scp kube-controller-manager.config kube-controller-manager.conf k8s-master3:/etc/kubernetes/
scp kube-controller-manager.service k8s-master1:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.3.51",
    "192.168.3.52",
    "192.168.3.53"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube-scheduler.config
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.config
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.config
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.config
cat > kube-scheduler.conf << EOF
KUBE_SCHEDULE_ARGS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.config \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
EnvironmentFile=/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULE_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
scp kube-scheduler*.pem k8s-master1:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/
scp kube-scheduler.config kube-scheduler.conf k8s-master1:/etc/kubernetes/
scp kube-scheduler.config kube-scheduler.conf k8s-master2:/etc/kubernetes/
scp kube-scheduler.config kube-scheduler.conf k8s-master3:/etc/kubernetes/
scp kube-scheduler.service k8s-master1:/usr/lib/systemd/system/
scp kube-scheduler.service k8s-master2:/usr/lib/systemd/system/
scp kube-scheduler.service k8s-master3:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
The following steps target the k8s-work nodes
wget https://github.com/containerd/containerd/releases/download/v1.6.12/cri-containerd-cni-1.6.12-linux-amd64.tar.gz
scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-work1:/root/
scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-work2:/root/
# Install on k8s-work1 and k8s-work2
tar xvf cri-containerd-cni-1.6.12-linux-amd64.tar.gz -C /
mkdir /etc/containerd/
containerd config default > /etc/containerd/config.toml
# Make sure config.toml ends up with the following settings
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = ""
  [plugins."io.containerd.grpc.v1.cri".registry.auths]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
  [plugins."io.containerd.grpc.v1.cri".registry.headers]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://hub-mirror.c.163.com", "https://mirror.ccs.tencentyun.com", "https://registry.cn-hangzhou.aliyuncs.com"]
# Note: the kubelet below is configured with the systemd cgroup driver, so you likely
# also want SystemdCgroup = true under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
chmod +x runc.amd64
scp runc.amd64 k8s-work1:/usr/local/sbin/runc
scp runc.amd64 k8s-work2:/usr/local/sbin/runc
systemctl daemon-reload
systemctl enable containerd
systemctl start containerd
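A quick smoke test of the CRI endpoint (crictl ships in the cri-containerd-cni bundle unpacked above):
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info >/dev/null && echo "CRI OK"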
Work nodes need to create the directory: mkdir -p /etc/kubernetes/ssl
scp bootstrap-token.csv k8s-work1:/etc/kubernetes/
scp bootstrap-token.csv k8s-work2:/etc/kubernetes/
scp ca.pem k8s-work1:/etc/kubernetes/ssl/
scp ca.pem k8s-work2:/etc/kubernetes/ssl/
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/bootstrap-token.csv)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kubelet-bootstrap.config
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.config
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.config
kubectl config use-context default --kubeconfig=kubelet-bootstrap.config
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.config
cat > kubelet.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.3.54",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF
cat > kubelet.conf <<EOF
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.config \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.config \
--config=/etc/kubernetes/kubelet.json \
--cni-bin-dir=/opt/cni/bin \
--cni-conf-dir=/etc/cni/net.d \
--container-runtime=remote \
--cgroup-driver=systemd \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--network-plugin=cni \
--rotate-certificates=true \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6 \
--max-pods=1500 \
--root-dir=/var/lib/kubelet \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
On each work node, remember to adjust address in kubelet.json.
Create the directories:
1. mkdir -p /var/log/kubernetes
2. mkdir -p /var/lib/kubelet
scp kubelet-bootstrap.config kubelet.json kubelet.conf k8s-work1:/etc/kubernetes/
scp kubelet-bootstrap.config kubelet.json kubelet.conf k8s-work2:/etc/kubernetes/
scp kubelet.service k8s-work1:/usr/lib/systemd/system/
scp kubelet.service k8s-work2:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
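Back on k8s-master1, the bootstrap CSRs should be auto-approved thanks to the clusterrolebindings created above, and the nodes should register (they stay NotReady until calico is installed below). If a CSR sits Pending, approve it with kubectl certificate approve <csr-name>:
kubectl get csr            # expect Approved,Issued entries
kubectl get nodes -o wide  # k8s-work1 / k8s-work2 appear, NotReady for now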
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "shanjie",
      "OU": "pingtaibu"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube-proxy.config
kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.config
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.config
kubectl config use-context default --kubeconfig=kube-proxy.config
cat > kube-proxy.yml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.3.54
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.config
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 192.168.3.54:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.3.54:10249
mode: "ipvs"
EOF
cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes KubeProxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
On each work node, remember to adjust the addresses in kube-proxy.yml.
Create the directory: mkdir -p /var/lib/kube-proxy
scp kube-proxy*.pem k8s-work1:/etc/kubernetes/ssl/
scp kube-proxy*.pem k8s-work2:/etc/kubernetes/ssl/
scp kube-proxy.config kube-proxy.yml k8s-work1:/etc/kubernetes/
scp kube-proxy.config kube-proxy.yml k8s-work2:/etc/kubernetes/
scp kube-proxy.service k8s-work1:/usr/lib/systemd/system/
scp kube-proxy.service k8s-work2:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
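To confirm kube-proxy is actually programming IPVS, inspect the virtual server table on a work node:
ipvsadm -Ln   # should list service VIPs (e.g. 10.96.0.1:443) once services exist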
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml
# Uncomment and change the pool to the pod network below
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
kubectl apply -f calico.yaml
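Watch the rollout; the nodes should flip to Ready once calico-node is Running on each of them:
kubectl get pods -n kube-system -o wide   # wait for the calico pods to be Running
kubectl get nodes                         # both work nodes should now be Ready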
Start from the upstream template: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base
# Make the following substitutions:
__DNS__DOMAIN__ --> cluster.local
image: registry.k8s.io/coredns/coredns:vXX --> image: coredns/coredns:XX
__DNS__MEMORY__LIMIT__ --> 200Mi
__DNS__SERVER__ --> 10.96.0.2
kubectl apply -f coredns.yaml
cat > coredns.yaml << "EOF"
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.10.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
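After applying the manifest, an in-cluster lookup verifies that coredns answers (busybox:1.28 ships a working nslookup; the pod removes itself afterwards):
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default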
kubectl apply -f nginx.yaml
# Verify the nginx welcome page at:
192.168.3.54:30001
192.168.3.55:30001
cat > nginx.yaml << "EOF"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
EOF
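A final end-to-end check from any machine that can reach the work nodes (grep just looks for the page title):
curl -s http://192.168.3.54:30001 | grep -i '<title>'   # expect: Welcome to nginx!
curl -s http://192.168.3.55:30001 | grep -i '<title>'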