System Initialization

Cluster Plan

Hostname    Address        Components                         Notes
--------    -----------    --------------------------------   ---------------
k8s-m-1     10.66.10.26    haproxy + keepalived,              VIP: 10.66.10.3
                           kube-apiserver,
                           kube-controller-manager,
                           kube-scheduler
k8s-m-2     10.66.10.27    (same as k8s-m-1)
k8s-m-3     10.66.10.28    (same as k8s-m-1)
k8s-w-1     10.66.10.31    docker, flannel,
                           kubelet, kube-proxy
k8s-w-2     10.66.10.32    (same as k8s-w-1)
k8s-w-3     10.66.10.33    (same as k8s-w-1)
...
k8s-w-10    10.66.10.40    (same as k8s-w-1)

Hostname

Set a permanent hostname, then log in again (run on all machines):

hostnamectl set-hostname k8s-m-1
...
hostnamectl set-hostname k8s-w-10

The hostname is stored in the /etc/hostname file.

If DNS cannot resolve the hostnames, add hostname-to-IP mappings to the /etc/hosts file on every machine:

cat >> /etc/hosts << EOF
...
EOF
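The heredoc body above did not survive extraction. Instead of typing all 13 entries by hand, they can be generated from the addressing scheme in the cluster plan (masters at 10.66.10.26-28, workers at 10.66.10.31-40). A sketch, not part of the original:

```shell
# Generate the hostname/IP pairs from the cluster plan above
# (masters 10.66.10.26-28, workers 10.66.10.31-40).
entries=""
for i in 1 2 3; do
  entries="${entries}10.66.10.$((25 + i)) k8s-m-${i}
"
done
for i in 1 2 3 4 5 6 7 8 9 10; do
  entries="${entries}10.66.10.$((30 + i)) k8s-w-${i}
"
done
printf '%s' "$entries"
# Append to the hosts file with: printf '%s' "$entries" >> /etc/hosts
```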

Set Up Passwordless SSH Across the Cluster

Allow the root account to log in to every node without a password:

ssh-keygen -t rsa 
ssh-copy-id root@k8s-m-1
…
ssh-copy-id root@k8s-w-10
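The elided middle of the sequence above covers 13 nodes in total. A dry-run sketch (not in the original) that prints one ssh-copy-id command per node, assuming the names from the cluster plan:

```shell
# Dry-run: print one ssh-copy-id command per node instead of typing
# all 13 by hand. Remove the leading 'echo' to actually copy the key.
nodes="k8s-m-1 k8s-m-2 k8s-m-3"
for i in 1 2 3 4 5 6 7 8 9 10; do
  nodes="$nodes k8s-w-$i"
done
for n in $nodes; do
  echo ssh-copy-id "root@$n"
done
```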

Install Dependencies

yum install -y epel-release
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget

 

Disable the Firewall

systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat 
iptables -X -t nat && iptables -P FORWARD ACCEPT

 

Disable SELinux

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

 

Tune Kernel Parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
...
EOF
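The kubernetes.conf body was lost from the source. A commonly used minimal set for a flannel/ipvs setup, which is an assumption rather than the author's original, looks like:

```shell
# Assumed minimal kernel parameters for Kubernetes networking;
# not the author's original file, which was lost in extraction.
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf   # load the new values
```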

 

Set Up Time Synchronization

timedatectl set-timezone Asia/Shanghai

Stop Unneeded Services

systemctl stop postfix && systemctl disable postfix

 

Upgrade the System Kernel

The stock 3.10.x kernel in CentOS 7.x has bugs that make Docker and Kubernetes unstable, for example:

  1. Newer Docker releases (1.13+) enable the kernel memory accounting feature, which is only experimental in the 3.10 kernel and cannot be turned off; under load, such as frequently starting and stopping containers, this causes cgroup memory leaks;

  2. A network-device reference-count leak, which produces errors like: "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1";

Possible fixes:

  1. Upgrade the kernel to 4.4.x or later;

  2. Or build the kernel by hand with CONFIG_MEMCG_KMEM disabled;

  3. Or install Docker 18.09.1 or later, which fixes the issue. But since kubelet also sets kmem (it vendors runc), kubelet would then have to be rebuilt with GOFLAGS="-tags=nokmem";

 

yum -y update
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org


rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm


yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
yum --enablerepo=elrepo-kernel install kernel-lt.x86_64 -y
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
sudo grub2-set-default 0   # use the index of the new kernel printed above

Install the kernel source files (optional; run after upgrading the kernel and rebooting):

 

Create Working Directories

mkdir -p /opt/k8s/{bin,work} /etc/{kubernetes,etcd}/cert

 

Set Environment Variables

echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc
source /root/.bashrc

 

 

Create the CA Certificate and Key

The Kubernetes components use x509 certificates to encrypt and authenticate their communication.

The CA (Certificate Authority) certificate is a self-signed root certificate used to sign all certificates created later.

The CA certificate is shared by every node in the cluster; create it once and use it to sign everything else.

Install the cfssl Tools

mkdir -p /opt/k8s/{cert,work} && cd /opt/k8s/work
wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64
mv cfssl_1.4.1_linux_amd64 /opt/k8s/bin/cfssl


wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
mv cfssljson_1.4.1_linux_amd64 /opt/k8s/bin/cfssljson


wget https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl-certinfo_1.4.1_linux_amd64
mv cfssl-certinfo_1.4.1_linux_amd64 /opt/k8s/bin/cfssl-certinfo


chmod +x /opt/k8s/bin/*
export PATH=/opt/k8s/bin:$PATH

 

Create the CA Config File

cd /opt/k8s/work
cat > ca-config.json << EOF
...
EOF

 

Create the Certificate Signing Request

cd /opt/k8s/work
cat > ca-csr.json << EOF
...
EOF

Generate the CA Certificate and Private Key

cd /opt/k8s/work
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*

 

Distribute the Certificate Files

mkdir -p /etc/kubernetes/cert
scp ca*.pem ca-config.json root@k8s-m-1:/etc/kubernetes/cert/
…
scp ca*.pem ca-config.json root@k8s-w-10:/etc/kubernetes/cert/

 

Deploy haproxy + keepalived

Here keepalived provides a VIP (10.66.10.3) for haproxy and runs master/backup election among the three haproxy instances, reducing the impact on the service when one haproxy fails.

 

Install keepalived

yum install -y keepalived

Configure keepalived:

[Note: check that the VIP is correct and that each node uses a different priority; the master1 node is MASTER and the rest are BACKUP. killall -0 checks by process name whether a process is alive.]

 

cat > /etc/keepalived/keepalived.conf << EOF
...
EOF
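The keepalived.conf body was lost from the source. A sketch consistent with the notes below (VIP 10.66.10.3, a killall -0 health check, MASTER on the first master node); the interface name and virtual_router_id are assumptions:

```shell
# Assumed keepalived.conf shape; adjust interface, router id and priority.
cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_script check_haproxy {
    script "killall -0 haproxy"    # liveness check by process name
    interval 3
    weight -2
}

vrrp_instance VI_1 {
    state MASTER                   # BACKUP on k8s-m-2 / k8s-m-3
    interface eth0                 # assumption: set to the real NIC
    virtual_router_id 51
    priority 250                   # lower on the BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        10.66.10.3
    }
    track_script {
        check_haproxy
    }
}
EOF
```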

 

scp -pr  /etc/keepalived/keepalived.conf root@k8s-m-1:/etc/keepalived/   (master nodes)

 

1. killall -0 checks by process name whether a process is alive; if the command is missing, install it with yum install psmisc -y

2. The state of the first master node is MASTER; on the other master nodes it is BACKUP

3. priority sets each node's priority, range: 0~250 (not strictly required)

 

Start and Verify the Service

systemctl enable keepalived.service 
systemctl start keepalived.service
systemctl status keepalived.service 
ip address show eth0

 

Deploy haproxy [all masters]

Here haproxy reverse-proxies the apiserver, forwarding all requests round-robin to each master node. Compared with a keepalived-only master/backup setup, where a single master carries all the traffic, this is more balanced and robust.

 

Install haproxy

yum install -y haproxy

Configure haproxy [identical on all three masters]

cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------


#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2


    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon


    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats


#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000


#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                 tcp
    bind                 *:8443
    option               tcplog
    default_backend      kubernetes-apiserver




#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  k8s-m-1 10.66.10.26:6443 check
    server  k8s-m-2 10.66.10.27:6443 check
    server  k8s-m-3 10.66.10.28:6443 check


#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin
EOF
 

 

Copy the config file to the other two master nodes

scp -pr /etc/haproxy/haproxy.cfg root@k8s-m-2:/etc/haproxy/
scp -pr /etc/haproxy/haproxy.cfg root@k8s-m-3:/etc/haproxy/

 

Start and Verify the Service

 systemctl enable haproxy.service 
 systemctl start haproxy.service 
 systemctl status haproxy.service 
 ss -lnt | grep -E "8443|1080"

 

Install and Configure kubectl

Download the kubectl Binary

wget https://dl.k8s.io/v1.18.2/kubernetes-server-linux-amd64.tar.gz
tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt/k8s/work/

 

Distribute the Kubernetes Binaries to the Nodes

cd /opt/k8s/work/kubernetes/server/bin

master:

scp -pr kube-apiserver kubectl kube-controller-manager kube-scheduler root@k8s-m-1:/opt/k8s/bin/
...
scp -pr kube-apiserver kubectl kube-controller-manager kube-scheduler root@k8s-m-3:/opt/k8s/bin/

node:

scp -pr kubelet kube-proxy root@k8s-w-1:/opt/k8s/bin/
...
scp -pr kubelet kube-proxy root@k8s-w-10:/opt/k8s/bin/

 

Create the admin Certificate and Private Key

kubectl talks to kube-apiserver over https; kube-apiserver authenticates and authorizes kubectl requests using the certificate they carry.

kubectl will be used for cluster administration later, so create an admin certificate with the highest privileges.

Create the certificate signing request:

cd /opt/k8s/work
cat > admin-csr.json << EOF
...
EOF
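The admin-csr.json body was elided. The conventional shape for a cluster-admin client certificate is shown below; O: system:masters maps to the built-in cluster-admin binding, and the C/ST/L name values are placeholders, not the author's originals:

```shell
# Assumed CSR shape for the admin client certificate.
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
EOF
```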

 

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

 

Create the kubeconfig File

kubectl uses a kubeconfig file to access the apiserver; it contains the kube-apiserver address and authentication material (the CA certificate and a client certificate):

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=https://10.66.10.3:8443 \
  --kubeconfig=kubectl.kubeconfig


# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=/opt/k8s/work/admin.pem \
  --client-key=/opt/k8s/work/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig
 
# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
  
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

 

Distribute the kubeconfig File

mkdir -p ~/.kube
scp kubectl.kubeconfig root@k8s-m-1:~/.kube/config
…
scp kubectl.kubeconfig root@k8s-m-3:~/.kube/config

 

 

Deploy the etcd Cluster

Download and Distribute the etcd Binaries

wget https://github.com/coreos/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz

 

Create the etcd Certificate and Private Key

cat > etcd-csr.json << EOF
...
EOF
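The etcd-csr.json body was lost. For a combined server/peer certificate the hosts list must contain the three etcd node IPs from the cluster plan; everything else below is a conventional placeholder, not the author's original:

```shell
# Assumed CSR shape for the etcd server/peer certificate.
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.66.10.26",
    "10.66.10.27",
    "10.66.10.28"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "system" }
  ]
}
EOF
```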

 

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \
    -ca-key=/opt/k8s/work/ca-key.pem \
    -config=/opt/k8s/work/ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

 

Distribute the generated certificate and private key to every etcd node:

mkdir -p /etc/etcd/cert
scp -pr etcd*.pem root@k8s-m-1:/etc/etcd/cert/
...
scp -pr etcd*.pem root@k8s-m-3:/etc/etcd/cert/

 

cat > /etc/systemd/system/etcd.service << EOF
...
EOF
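The unit-file body did not survive extraction. A minimal sketch for k8s-m-1 (10.66.10.26), assuming the data/WAL paths created below; the other members substitute their own name and IP. This is a reconstruction under stated assumptions, not the author's original file:

```shell
# Assumed etcd unit for k8s-m-1; adjust --name and IPs per member.
cat > /etc/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \\
  --name=k8s-m-1 \\
  --data-dir=/data/k8s/etcd/data \\
  --wal-dir=/data/k8s/etcd/wal \\
  --cert-file=/etc/etcd/cert/etcd.pem \\
  --key-file=/etc/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --peer-cert-file=/etc/etcd/cert/etcd.pem \\
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \\
  --listen-peer-urls=https://10.66.10.26:2380 \\
  --initial-advertise-peer-urls=https://10.66.10.26:2380 \\
  --listen-client-urls=https://10.66.10.26:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls=https://10.66.10.26:2379 \\
  --initial-cluster=k8s-m-1=https://10.66.10.26:2380,k8s-m-2=https://10.66.10.27:2380,k8s-m-3=https://10.66.10.28:2380 \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```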

 

Start the etcd Service

mkdir -p /data/k8s/etcd/{data,wal}
systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd

 

Verify Service Health

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=https://10.66.10.26:2379,https://10.66.10.27:2379,https://10.66.10.28:2379 \
--cacert=/etc/kubernetes/cert/ca.pem \
--cert=/etc/etcd/cert/etcd.pem \
--key=/etc/etcd/cert/etcd-key.pem endpoint health


Check the Current Leader

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=https://10.66.10.26:2379,https://10.66.10.27:2379,https://10.66.10.28:2379 endpoint status


 

Install flannel

Create the flannel Extraction Directory

mkdir -p /opt/k8s/work/flannel
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz


tar -xzvf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/k8s/work/flannel

Distribute the binaries to all cluster nodes:

cd /opt/k8s/work
scp flannel/{flanneld,mk-docker-opts.sh} root@k8s-w-1:/opt/k8s/bin/
...
scp flannel/{flanneld,mk-docker-opts.sh} root@k8s-w-10:/opt/k8s/bin/
chmod +x /opt/k8s/bin/*

Create the flannel Certificate and Private Key

flanneld reads and writes subnet allocation data in etcd, and the etcd cluster has mutual x509 authentication enabled, so flanneld needs its own certificate and private key.

Create the certificate signing request:

cd /opt/k8s/work
cat > flanneld-csr.json << EOF
...
EOF

This certificate is only used by flanneld as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

 

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

 

Distribute the generated certificate and private key to the worker nodes:

cd /opt/k8s/work
mkdir -p /etc/flanneld/cert
scp flanneld*.pem root@k8s-w-1:/etc/flanneld/cert
...
scp flanneld*.pem root@k8s-w-10:/etc/flanneld/cert

Write the Cluster Pod Network Config to etcd

Note: run this step only once.

cd /opt/k8s/work
etcdctl \
--endpoints=https://10.66.10.26:2379,https://10.66.10.27:2379,https://10.66.10.28:2379 \
--ca-file=/opt/k8s/work/ca.pem \
--cert-file=/opt/k8s/work/flanneld.pem \
--key-file=/opt/k8s/work/flanneld-key.pem \
set /kubernetes/network/config '{"Network":"'172.16.0.0/16'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'

 

 

The current flanneld version (v0.11.0) does not support the etcd v3 API, so the config key and subnet data are written with the etcd v2 API;

The Pod network prefix written here (${CLUSTER_CIDR}, e.g. /16) must be shorter than SubnetLen, and must match the kube-controller-manager --cluster-cidr value;

 
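As a sanity check on those numbers: a /16 pod network with SubnetLen 21 yields 2^(21-16) = 32 node subnets of 2^(32-21) = 2048 addresses each, so this layout supports at most 32 nodes:

```shell
# Pure arithmetic: node subnets = 2^(SubnetLen - prefix),
# addresses per subnet = 2^(32 - SubnetLen).
subnets=$((1 << (21 - 16)))
addrs=$((1 << (32 - 21)))
echo "$subnets node subnets, $addrs addresses each"
# prints: 32 node subnets, 2048 addresses each
```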

Create the flanneld systemd Unit File

cat > /etc/systemd/system/flanneld.service << EOF
...
EOF

 

Start the flanneld Service

systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld

 

Check the Pod Subnets Assigned to Each flanneld

View the cluster Pod network (/16):

etcdctl \
--endpoints=https://10.66.10.26:2379,https://10.66.10.27:2379,https://10.66.10.28:2379 \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
get /kubernetes/network/config


 

List the allocated Pod subnets (/21):

etcdctl \
--endpoints=https://10.66.10.26:2379,https://10.66.10.27:2379,https://10.66.10.28:2379  \
--ca-file=/etc/kubernetes/cert/ca.pem \
--cert-file=/etc/flanneld/cert/flanneld.pem \
--key-file=/etc/flanneld/cert/flanneld-key.pem \
ls /kubernetes/network/subnets


The flannel.1 interface address is the first IP (.0) of the node's allocated Pod subnet, and it is a /32 address;

 ip route show |grep flannel.1


Verify Pod-Network Connectivity Between Nodes

After deploying flannel on each node, check that the flannel interface was created (it may be named flannel0, flannel.0, flannel.1, etc.):

ssh k8s-w-8 "/usr/sbin/ip addr show flannel.1|grep -w inet"


From each node, ping every other node's flannel interface IP and make sure it responds:

ssh k8s-w-8 "ping -c 3 <another node's flannel.1 IP>"


Deploy the kube-apiserver Cluster

Create the kubernetes-master Certificate and Private Key

Create the certificate signing request:

cat > kubernetes-csr.json << EOF
...
EOF

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

 

Copy the generated certificate and private key files to all master nodes:

mkdir -p /etc/kubernetes/cert
scp kubernetes*.pem root@k8s-m-1:/etc/kubernetes/cert/
...
scp kubernetes*.pem root@k8s-m-3:/etc/kubernetes/cert/

 

Create the Encryption Config File

cd /opt/k8s/work
cat > encryption-config.yaml << EOF
...
EOF
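The encryption-config.yaml body was elided. It normally embeds a random 32-byte key; below is a key generator plus a plausible file shape. The aescbc provider choice and the key name are assumptions, not the author's original:

```shell
# 32 random bytes, base64-encoded (44 characters), used as the provider secret.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# Assumed EncryptionConfig shape; provider choice is an assumption.
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
```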

Copy the encryption config file to the /etc/kubernetes directory on every master node:

scp encryption-config.yaml root@k8s-m-1:/etc/kubernetes/
...
scp encryption-config.yaml root@k8s-m-3:/etc/kubernetes/

Create the Audit Policy File

cat > audit-policy.yaml << EOF
...
EOF

Distribute the audit policy file:

scp audit-policy.yaml root@k8s-m-1:/etc/kubernetes/audit-policy.yaml
...
scp audit-policy.yaml root@k8s-m-3:/etc/kubernetes/audit-policy.yaml

 

Create the Certificate Used Later by metrics-server / kube-prometheus

Create the certificate signing request:

cd /opt/k8s/work
cat > proxy-client-csr.json << EOF
...
EOF

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem  \
  -config=/etc/kubernetes/cert/ca-config.json  \
  -profile=kubernetes proxy-client-csr.json | cfssljson -bare proxy-client

Copy the generated certificate and private key files to all master nodes:

scp proxy-client*.pem root@k8s-m-1:/etc/kubernetes/cert/
...
scp proxy-client*.pem root@k8s-m-3:/etc/kubernetes/cert/

The kube-apiserver Unit File

cat > /etc/systemd/system/kube-apiserver.service << EOF
...
EOF

 

Distribute the kube-apiserver Unit File to the Nodes

scp /etc/systemd/system/kube-apiserver.service root@k8s-m-1:/etc/systemd/system/kube-apiserver.service
...
scp /etc/systemd/system/kube-apiserver.service root@k8s-m-3:/etc/systemd/system/kube-apiserver.service

Start the kube-apiserver Service

mkdir -p /data/k8s/k8s/kube-apiserver
systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver

Deploy the kube-controller-manager Cluster

Create the kube-controller-manager Certificate and Private Key

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-controller-manager-csr.json << EOF
...
EOF

The hosts list contains all kube-controller-manager node IPs;

CN and O are both system:kube-controller-manager; the Kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Distribute the generated certificate and private key to all master nodes:

cd /opt/k8s/work
scp kube-controller-manager*.pem root@k8s-m-1:/etc/kubernetes/cert/
...
scp kube-controller-manager*.pem root@k8s-m-3:/etc/kubernetes/cert/

 

Create and Distribute the kubeconfig File

kube-controller-manager uses a kubeconfig file to access the apiserver; it provides the apiserver address, the embedded CA certificate, the kube-controller-manager certificate, and so on:

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://10.66.10.3:8443" \
  --kubeconfig=kube-controller-manager.kubeconfig


kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig


kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig


kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

 

 

Copy the kube-controller-manager kubeconfig to all master nodes

scp kube-controller-manager.kubeconfig root@k8s-m-1:/etc/kubernetes/kube-controller-manager.kubeconfig
...
scp kube-controller-manager.kubeconfig root@k8s-m-3:/etc/kubernetes/kube-controller-manager.kubeconfig

 

Create the kube-controller-manager Unit File

cat > /etc/systemd/system/kube-controller-manager.service << EOF
...
EOF

 

Distribute the Unit File

scp /etc/systemd/system/kube-controller-manager.service root@k8s-m-1:/etc/systemd/system/kube-controller-manager.service
...
scp /etc/systemd/system/kube-controller-manager.service root@k8s-m-3:/etc/systemd/system/kube-controller-manager.service

 

Start the kube-controller-manager Service

mkdir -p /data/k8s/k8s/kube-controller-manager
systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager

 

Deploy the kube-scheduler Cluster

Create the kube-scheduler Certificate and Private Key

Create the certificate signing request:

 

cat > kube-scheduler-csr.json << EOF
...
EOF

 

Generate the certificate and private key:

cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

 

 

Distribute the generated certificate and private key to all master nodes:

scp -pr kube-scheduler*.pem root@k8s-m-1:/etc/kubernetes/cert/
…
scp -pr kube-scheduler*.pem root@k8s-m-3:/etc/kubernetes/cert/

 

Create and Distribute the kubeconfig File

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server="https://10.66.10.3:8443" \
  --kubeconfig=kube-scheduler.kubeconfig


kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig


kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig


kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

 

Distribute the kubeconfig to all master nodes:

 scp -pr kube-scheduler.kubeconfig root@k8s-m-1:/etc/kubernetes/kube-scheduler.kubeconfig
…
 scp -pr kube-scheduler.kubeconfig root@k8s-m-3:/etc/kubernetes/kube-scheduler.kubeconfig

 

Create the kube-scheduler Config File

cat > /etc/kubernetes/kube-scheduler.yaml << EOF
...
EOF

 

 

Distribute the kube-scheduler config file to all master nodes:

scp -pr  /etc/kubernetes/kube-scheduler.yaml root@k8s-m-1:/etc/kubernetes/kube-scheduler.yaml
…
scp -pr  /etc/kubernetes/kube-scheduler.yaml root@k8s-m-3:/etc/kubernetes/kube-scheduler.yaml

 

 

Create the kube-scheduler Unit File

cat > /etc/systemd/system/kube-scheduler.service << EOF
...
EOF

Distribute the kube-scheduler Unit File

scp -pr  /etc/systemd/system/kube-scheduler.service root@k8s-m-1:/etc/systemd/system/kube-scheduler.service
…
scp -pr  /etc/systemd/system/kube-scheduler.service root@k8s-m-3:/etc/systemd/system/kube-scheduler.service

 

Start the kube-scheduler Service

mkdir -p /data/k8s/k8s/kube-scheduler
systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler

 

Deploy Docker

Download and Distribute the Docker Binaries

Download the latest release from the Docker download page:

cd /opt/k8s/work
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
tar -xvf docker-18.09.6.tgz

 

Distribute the binaries to all worker nodes:

cd /opt/k8s/work
scp docker/*  root@k8s-w-1:/opt/k8s/bin/
...
scp docker/* root@k8s-w-10:/opt/k8s/bin/

 

 

Create and Distribute the systemd Unit File on the Worker Nodes

cat > /etc/systemd/system/docker.service << EOF
...
EOF

 

 

 

iptables -P FORWARD ACCEPT

 

Distribute the unit file to all worker machines:

cd /opt/k8s/work
scp docker.service root@k8s-w-1:/etc/systemd/system/
...
scp docker.service root@k8s-w-10:/etc/systemd/system/

 

Create and Distribute the Docker Config File

Use registry mirror servers to speed up image pulls and raise the download concurrency (restart dockerd for the changes to take effect):

cd /opt/k8s/work


cat > docker-daemon.json << EOF
{
    "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://hub-mirror.c.163.com"],
    "insecure-registries": ["docker02:35000"],
    "max-concurrent-downloads": 20,
    "live-restore": true,
    "max-concurrent-uploads": 10,
    "debug": true,
    "data-root": "/data/k8s/docker/data",
    "exec-root": "/data/k8s/docker/exec",
    "log-opts": {
      "max-size": "100m",
      "max-file": "5"
    }
}
EOF

 

 

Distribute the Docker config file to all worker nodes:

mkdir -p  /etc/docker/ /data/k8s/docker/{data,exec}
scp docker-daemon.json root@k8s-w-1:/etc/docker/daemon.json
...
scp docker-daemon.json root@k8s-w-10:/etc/docker/daemon.json

 

Start the Docker Service

systemctl daemon-reload && systemctl enable docker && systemctl restart docker

 

 

 

 

Deploy kubelet

Create the kubelet Bootstrap kubeconfig Files

 

cat > /opt/k8s/bin/environment.sh << EOF
...
EOF

source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"


    # Create the token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:${node_name} \
      --kubeconfig ~/.kube/config)


    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig


    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig


    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig


    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_name}.kubeconfig
  done

 

View the tokens kubeadm created for each node:

kubeadm token list --kubeconfig ~/.kube/config

Distribute the Bootstrap kubeconfig Files to All Worker Nodes

cd /opt/k8s/work/
scp -pr kubelet-bootstrap-k8s-w-1.kubeconfig root@k8s-w-1:/etc/kubernetes/kubelet-bootstrap.kubeconfig
...
scp -pr kubelet-bootstrap-k8s-w-10.kubeconfig root@k8s-w-10:/etc/kubernetes/kubelet-bootstrap.kubeconfig

 

Create and Distribute the kubelet Config File

cd /opt/k8s/work/
cat > kubelet-config.yaml << EOF
...
EOF

Distribute the kubelet config file

(Edit the config per node: address: "10.66.10.31" must be the node's own IP address)

scp kubelet-config.yaml root@k8s-w-1:/etc/kubernetes/kubelet-config.yaml
...
scp kubelet-config.yaml root@k8s-w-10:/etc/kubernetes/kubelet-config.yaml

 

Create the kubelet Unit File

cat > /etc/systemd/system/kubelet.service << EOF
...
EOF

 

Distribute the kubelet unit file:

scp /etc/systemd/system/kubelet.service root@k8s-w-1:/etc/systemd/system/kubelet.service
...
scp /etc/systemd/system/kubelet.service root@k8s-w-10:/etc/systemd/system/kubelet.service

 

Grant kube-apiserver Access to the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes-master

 

Bootstrap Token Auth 和授予权限

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

 

Create the Working Directory

mkdir -p /data/k8s/k8s/kubelet

 

Disable swap, otherwise kubelet will fail to start

/usr/sbin/swapoff -a

 

Start the kubelet Service

systemctl daemon-reload  && systemctl restart kubelet && systemctl enable kubelet

 

If kubelet fails to start with an error like:

Failed to restart kubelet.service: Unit not found.

 

edit the kubelet.service unit file to include:

After=docker.service
Requires=docker.service

 

then restart the kubelet service.

 

 

Auto-approve CSR Requests and Generate kubelet Client Certificates

cd /opt/k8s/work/
cat > csr-crb.yaml << EOF
...
EOF

 

Manually Approve Server Cert CSRs

For security reasons, the CSR-approving controllers do not auto-approve kubelet server certificate signing requests; approve them manually:

kubectl get csr
kubectl certificate approve csr-bjtp4
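On a larger cluster, approving CSRs one at a time gets tedious. A hypothetical one-liner (not in the original) that approves every pending CSR at once, shown here in dry-run form against sample `kubectl get csr` output; pipe the real output through the same awk filter and drop `echo` to actually approve:

```shell
# Dry-run: extract pending CSR names and print the approve commands.
sample='NAME        AGE   REQUESTOR             CONDITION
csr-bjtp4   1m    system:node:k8s-w-1   Pending
csr-x2k9q   1m    system:node:k8s-w-2   Approved,Issued'
pending=$(printf '%s\n' "$sample" | awk '$NF == "Pending" {print $1}')
for c in $pending; do
  echo kubectl certificate approve "$c"
done
```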

 

 

Deploy kube-proxy

kube-proxy runs on all worker nodes; it watches the apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.

 

Create the kube-proxy Certificate

Create the certificate signing request:

cd /opt/k8s/work
cat > kube-proxy-csr.json << EOF
...
EOF
  • CN: sets the certificate's User to system:kube-proxy;

  • The predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants access to kube-apiserver's proxy-related APIs;

  • The certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

Generate the certificate and private key:

cd /opt/k8s/work
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
ls kube-proxy*

 

Create and Distribute the kubeconfig File

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig


kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig


kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig


kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Distribute the kubeconfig file:

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
for node_name in ${NODE_NAMES[@]}
  do
    echo ">>> ${node_name}"
    scp kube-proxy.kubeconfig root@${node_name}:/etc/kubernetes/
  done

 

Create the kube-proxy Config File

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > kube-proxy-config.yaml.template << EOF
...
EOF
  • bindAddress: the listen address;

  • clientConnection.kubeconfig: the kubeconfig used to connect to the apiserver;

  • clusterCIDR: kube-proxy uses --cluster-cidr to distinguish in-cluster from external traffic; SNAT for requests to Service IPs only happens when --cluster-cidr or --masquerade-all is set;

  • hostnameOverride: must match the kubelet value, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules;

  • mode: use ipvs mode;

Create and distribute a kube-proxy config file for each node:

cd /opt/k8s/work
scp kube-proxy-config.yaml.template root@k8s-w-1:/etc/kubernetes/kube-proxy-config.yaml
...
scp kube-proxy-config.yaml.template root@k8s-w-10:/etc/kubernetes/kube-proxy-config.yaml

 

Create and Distribute the kube-proxy systemd Unit File

cd /opt/k8s/work
source /opt/k8s/bin/environment.sh
cat > /etc/systemd/system/kube-proxy.service << EOF
...
EOF

Distribute the kube-proxy systemd unit file:

cd /opt/k8s/work
scp kube-proxy.service root@k8s-w-1:/etc/systemd/system/
...
scp kube-proxy.service root@k8s-w-10:/etc/systemd/system/

 

Start the kube-proxy Service

cd /opt/k8s/work
mkdir -p /data/k8s/k8s/kube-proxy
modprobe ip_vs_rr
systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
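kube-proxy in ipvs mode relies on more kernel modules than the single ip_vs_rr loaded above. A dry-run sketch (the module set is the commonly used one, an assumption rather than the author's list) that prints the modprobe commands; drop `echo` to actually load them:

```shell
# Usual ipvs module set for kube-proxy; printed rather than loaded.
ipvs_mods="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
for m in $ipvs_mods; do
  echo modprobe "$m"
done
```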

 

 

Deploy the CoreDNS Add-on

 

Download and Configure CoreDNS

cd /opt/k8s/work
git clone https://github.com/coredns/deployment.git
mv deployment coredns-deployment

Create CoreDNS

cd /opt/k8s/work/coredns-deployment/kubernetes
./deploy.sh -i 10.254.0.2 -d cluster.local | kubectl apply -f -

 

Check CoreDNS

kubectl get svc,pods -n kube-system| grep coredns


 

Test CoreDNS Resolution

cat > dig.yaml << EOF
...
EOF

 

Check the dig Pod

kubectl get pods

 

Test that in-cluster names resolve:

kubectl exec -i dig -n default -- nslookup kubernetes


 

Test that external names resolve:

kubectl exec -i dig -n default -- nslookup baidu.com


 

 

Deploy the ingress-nginx Add-on

To deploy single-node ingress-nginx:

cd /opt/k8s/work/
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.32.0/deploy/static/provider/cloud/deploy.yaml


mv deploy.yaml ingress-nginx.yaml
kubectl apply -f ingress-nginx.yaml

 

To deploy highly available ingress-nginx:

Requirement: the public ingress IP 119.254.155.238 round-robins ports 80 and 443 through haproxy to 10.66.10.31, 10.66.10.32 and 10.66.10.33

 

1. First, label the worker nodes

kubectl label node k8s-w-1 usage=ingress
...
kubectl label node k8s-w-3 usage=ingress

 

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx




---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - update
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io   # k8s 1.14+
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader-nginx
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - endpoints
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        usage: ingress
      containers:
        - name: controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=ingress-nginx/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=ingress-nginx/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
  namespace: ingress-nginx
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - extensions
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /extensions/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-2.0.3
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: k8s-w-1
      containers:
        - name: create
          image: jettech/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc
            - --namespace=ingress-nginx
            - --secret-name=ingress-nginx-admission
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-2.0.3
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 0.32.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: k8s-w-1
      containers:
        - name: patch
          image: jettech/kube-webhook-certgen:v1.2.0
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=ingress-nginx
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  namespace: ingress-nginx
Test the Ingress

cat > nginx.yaml <<EOF
...
EOF

cat > service-nginx.yaml <<EOF
...
EOF

cat > ingress-nginx.yaml <<EOF
...
EOF
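A minimal sketch of what the three test manifests could contain, using the `networking.k8s.io/v1beta1` Ingress API served by this controller version; the resource names, replica count, image tag, and the test host `nginx.example.com` are assumptions, not values from this cluster:

```shell
# Hypothetical test manifests: a Deployment, a ClusterIP Service in front of it,
# and an Ingress routing a test host to that Service.
cat > nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.17    # assumed image tag
          ports:
            - containerPort: 80
EOF

cat > service-nginx.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF

cat > ingress-nginx.yaml <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: nginx   # matches --ingress-class=nginx above
spec:
  rules:
    - host: nginx.example.com            # assumed test host
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 80
EOF
```

Apply with `kubectl apply -f nginx.yaml -f service-nginx.yaml -f ingress-nginx.yaml`; since the controller runs with hostNetwork on the nodes labeled `usage: ingress`, point the test host at one of those node IPs and verify with `curl -H "Host: nginx.example.com" http://<ingress-node-ip>/`.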