k8s-v1.20.10 Binary Deployment Guide

k8s-v1.20.10 1master&2node

Files used during the deployment:
Link: https://pan.baidu.com/s/12mk4bThqXKJ04OPZP2nxWQ  Extraction code: 6666

Table of Contents

  • k8s-v1.20.10 1master&2node
    • Lab environment
      • Host network and component information
      • Host certificate information
    • Host initialization
      • Configure hostnames
      • Configure the hosts file
      • Passwordless SSH login
      • Disable the firewall
      • Disable swap
      • Configure yum repositories
      • Configure time synchronization
      • Upgrade the kernel
      • Tune kernel parameters
      • Install docker-ce
      • Install cfssl
    • CA initialization
      • etcd-ca
      • kube-apiserver-ca
      • front-proxy-ca
    • Deploy etcd
      • Create the etcd certificate
      • Create the etcd configuration file
    • Deploy the apiserver
      • Upload the k8s components
      • Create the token.csv file
      • Create the apiserver certificate
      • Create the front-proxy-client certificate
      • Create the service account key pair
      • Create the apiserver configuration file
    • Deploy kubectl
      • Create the kubectl certificate
      • Create the kubeconfig for kubectl
    • Deploy kube-controller-manager
      • Create the kube-controller-manager certificate
      • Create the kubeconfig for kube-controller-manager
      • Create the kube-controller-manager configuration file
    • Deploy kube-scheduler
      • Create the kube-scheduler certificate
      • Create the kubeconfig for kube-scheduler
      • Create the kube-scheduler configuration file
    • Deploy kubelet
      • Create the kubeconfig for kubelet
      • Create the kubelet configuration file
      • Create the kubelet unit file
      • Create RBAC rules to auto-approve CSRs
      • Deploy kubelet on the node machines
    • Deploy kube-proxy
      • Create the kube-proxy certificate
      • Create the kubeconfig for kube-proxy
      • Create the kube-proxy configuration file
      • Create the kube-proxy unit file
      • Deploy kube-proxy on the node machines
    • Assign cluster roles
    • Deploy calico
    • Deploy coredns
    • Deploy metric server
    • Test the cluster network
      • Create test pods
      • Test pod resolution of services
      • Test node access to the kubernetes svc
      • Test pod-to-pod communication

Lab environment

Host network and component information

K8s cluster role  IP  Hostname  Installed components
master  192.168.0.10  k8s-master-1  apiserver, controller-manager, scheduler, etcd, docker, kubectl, kubelet, kube-proxy, calico, coredns, metric-server
node  192.168.0.11  k8s-node-1  kubelet, kube-proxy, docker, calico
node  192.168.0.12  k8s-node-2  kubelet, kube-proxy, docker, calico

Note: normally the master node is only responsible for control-plane work and does not run kube-proxy, calico, coredns, or metric-server; to save resources, the master also serves as a worker here.

# OS version
	CentOS 7.9 (4.19.12-1.el7.elrepo.x86_64)

# Hardware
	4 GB RAM / 2 vCPUs / 70 GB disk, virtualization enabled, NAT network mode
	
# Component versions
	k8s-server & k8s-node (apiserver, kubectl, kube-scheduler, kube-proxy) 1.20.10
	etcd 3.5.0
	pause: v3.6
	calico/node:v3.20.1
	calico/pod2daemon-flexvol:v3.20.1
	calico/cni:v3.20.1
	coredns/coredns: v1.7.0
	docker: 20.10.8
	metric-server:v0.4.1

# Networks
	service: 10.0.0.0/24
	pod: 10.70.2.0/24

Host certificate information

Three CAs are used: one for the apiserver, one for etcd, and one for the API aggregation layer (the aggregation layer gets its own CA because sharing the apiserver's CA causes conflicts). The issuing CAs are kube-apiserver-ca, etcd-ca, and front-proxy-ca.
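
To check later which CA issued which certificate, openssl can verify a leaf certificate against a CA file. A minimal sketch (run in /root/ssl once the certificates below have been generated):

	openssl verify -CAfile etcd-ca.pem etcd.pem                        # expected: etcd.pem: OK
	openssl verify -CAfile kube-apiserver-ca.pem kube-apiserver.pem    # expected: kube-apiserver.pem: OK
	openssl verify -CAfile front-proxy-ca.pem front-proxy-client.pem   # expected: front-proxy-client.pem: OK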

Host initialization

Configure hostnames

# master-1
	hostnamectl set-hostname k8s-master-1 && bash

# node-1
	hostnamectl set-hostname k8s-node-1 && bash

# node-2
	hostnamectl set-hostname k8s-node-2 && bash

Configure the hosts file

# master,node
cat >> /etc/hosts <<EOF
192.168.0.10 k8s-master-1
192.168.0.11 k8s-node-1
192.168.0.12 k8s-node-2
EOF

Passwordless SSH login

# master	
	ssh-keygen -t rsa
	ssh-copy-id -i .ssh/id_rsa.pub root@k8s-node-1
	ssh-copy-id -i .ssh/id_rsa.pub root@k8s-node-2

Disable the firewall

# master,node

# Disable the firewall
	systemctl disable firewalld --now

# Disable SELinux
	sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
	sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
	setenforce 0

Disable swap

# master,node
	swapoff -a && sysctl -w vm.swappiness=0
	sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Configure yum repositories

# master,node
	curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
	yum install -y yum-utils device-mapper-persistent-data lvm2
	yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
	sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

# Install base dependency packages
	yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate

Configure time synchronization

# master,node
# Sync the time
    yum install ntpdate -y
    ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    echo 'Asia/Shanghai' >/etc/timezone
    ntpdate time2.aliyun.com
    
# Add to crontab
	*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
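
To install that entry non-interactively, the usual crontab idiom works (a sketch; it appends without clobbering existing entries):

	(crontab -l 2>/dev/null; echo '*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com') | crontab -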

Upgrade the kernel

# master,node

# Update the system
	yum update -y --exclude=kernel* 

# Upload kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm and kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm to the two node machines
	for i in k8s-node-1 k8s-node-2; do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm root@$i:/root ;done
	
# Install the kernel
	yum localinstall -y kernel-ml*
	
# On all nodes, change the default boot kernel and enable user namespaces
	grub2-set-default  0 && grub2-mkconfig -o /etc/grub2.cfg
	grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
	
# Reboot all nodes, then check that the default kernel is 4.19
	grubby --default-kernel

Tune kernel parameters

# master,node - append the following to the end of the file
cat >> /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited 
EOF
# master,node
# If firewalld is not to your taste, iptables can be installed instead
	yum install iptables-services -y
	service iptables stop   && systemctl disable iptables
	
# Install the required packages
	yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Enable ipvs. Without it, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so the official docs recommend enabling ipvs.
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl enable --now systemd-modules-load.service
# master,node
# Enable k8s kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system

# Reboot, then check that the parameters and modules took effect
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack

Install docker-ce

# master,node

# Install docker-ce
	yum install docker-ce.* -y

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://ornb7jit.mirror.aliyuncs.com"],
  "default-ipc-mode": "shareable"
}
EOF
systemctl daemon-reload && systemctl enable --now docker
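
Since the kubelet configuration later assumes the systemd cgroup driver, it is worth confirming docker picked it up:

	docker info | grep -i 'cgroup driver'
	# expected: Cgroup Driver: systemd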

Install cfssl

# Install on the master node
# Steps omitted; one typical install is sketched below
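
A minimal sketch of one common way to install the cfssl and cfssljson binaries (the version and download URLs here are assumptions; any recent release should do):

	wget -O /usr/local/bin/cfssl     https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssl_1.6.1_linux_amd64
	wget -O /usr/local/bin/cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.1/cfssljson_1.6.1_linux_amd64
	chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
	cfssl version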

CA initialization

Notes:

  1. All certificates are generated on the master node and then distributed to the node machines
  2. Three separate CAs are used here to issue certificates for etcd, the apiserver, and API aggregation; in practice etcd, the apiserver, and the components that talk to the apiserver can share one CA, with a second CA for API aggregation
# Create a directory to hold the certificates
	mkdir /root/ssl
# Create the CA config file
cat > ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

Notes:

Field        Meaning
signing      the certificate can sign other certificates; the generated ca.pem contains CA=TRUE
server auth  clients may use this CA to verify certificates presented by servers
client auth  servers may use this CA to verify certificates presented by clients
profiles     ca-config.json may define multiple profiles with different expiry times, usages, and so on; a specific profile is selected later when signing certificates

etcd-ca

# Create the CA certificate signing request file
cat > etcd-ca-csr.json <<EOF
{     
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    },  
    "names": [{   
        "C": "CN",
        "L": "hunan",
        "ST": "changsha",
        "O": "k8s",
        "OU": "system"
    }]   
}
EOF

Notes:

Field  Meaning
hosts  empty here, so any host may use the etcd-ca.pem certificate
CN     Common Name: kube-apiserver extracts this field from a certificate as the requesting User Name; browsers use it to validate a site, and for an SSL certificate it is the site's domain
C      country of the applicant; must be a two-letter country code, e.g. CN for China
L      Locality: region or city
ST     State or province
O      Organization: kube-apiserver extracts this field from a certificate as the requesting user's Group; usually the company name
OU     organizational unit (department) name
# Generate the CA certificate
[root@k8s-master-1 ssl]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
2021/10/04 09:32:53 [INFO] generating a new CA key and certificate from CSR
2021/10/04 09:32:53 [INFO] generate received request
2021/10/04 09:32:53 [INFO] received CSR
2021/10/04 09:32:53 [INFO] generating key: rsa-2048
2021/10/04 09:32:53 [INFO] encoded CSR
2021/10/04 09:32:53 [INFO] signed certificate with serial number 465476681475479358683323025732390386015350218727

# Inspect the generated files
[root@k8s-master-1 ssl]# ls etcd*
etcd-ca.csr  etcd-ca-csr.json  etcd-ca-key.pem  etcd-ca.pem

Notes:

  1. etcd-ca-key.pem: the generated private key
  2. etcd-ca.pem: the generated CA certificate, used later to issue the etcd certificates
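
A quick way to confirm that the generated file really is a CA certificate is to inspect its basic constraints:

	openssl x509 -in etcd-ca.pem -noout -text | grep -A1 'Basic Constraints'
	# expected output includes: CA:TRUE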

kube-apiserver-ca

# Create the CA certificate signing request file
cat > kube-apiserver-ca-csr.json <<EOF 
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [{
      "C": "CN",
      "ST": "hunan",
      "L": "changsha",
      "O": "k8s",
      "OU": "system"
    }]
}
EOF

# Generate the CA certificate
	cfssl gencert -initca kube-apiserver-ca-csr.json | cfssljson -bare kube-apiserver-ca

front-proxy-ca

# Create the CA certificate signing request file
cat > front-proxy-ca-csr.json <<EOF 
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [{
      "C": "CN",
      "ST": "hunan",
      "L": "changsha",
      "O": "k8s",
      "OU": "system"
    }]
}
EOF

# Generate the CA certificate
	cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca

Deploy etcd

Kubernetes stores its state in etcd, so prepare an etcd database first. To avoid a single point of failure, etcd would normally run as a cluster: a 3-member cluster tolerates 1 failure and a 5-member cluster tolerates 2. Since this lab uses a single machine, a single-node etcd is deployed here.

Create the etcd certificate

# Create the etcd certificate signing request file
cat > etcd-csr.json<<EOF 
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.0.10"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "hunan",
    "L": "changsha",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF

# Generate the certificate
	cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd

# Note:
	hosts must list the IP addresses of the machines that run etcd
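
To confirm the hosts field made it into the certificate, inspect its subject alternative names:

	openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expected output includes: IP Address:127.0.0.1, IP Address:192.168.0.10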

Create the etcd configuration file

# Upload the etcd tarball, unpack it, and move the binaries
	mv etcd etcdctl etcdutl /usr/bin

# Create the required directories
	mkdir -p /etc/etcd/ssl
	mkdir -p /var/lib/etcd/default.etcd

# Create the etcd configuration file
cat > /etc/etcd/etcd.conf<<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.10:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd1"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.10:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.0.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Create the systemd unit file
cat > /usr/lib/systemd/system/etcd.service <<"EOF"
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
    --cert-file=/etc/etcd/ssl/etcd.pem \
    --key-file=/etc/etcd/ssl/etcd-key.pem \
    --trusted-ca-file=/etc/etcd/ssl/etcd-ca.pem \
    --peer-cert-file=/etc/etcd/ssl/etcd.pem \
    --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
    --peer-trusted-ca-file=/etc/etcd/ssl/etcd-ca.pem \
    --peer-client-cert-auth \
    --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload 

# Copy the certificates into place
	cp etcd.pem etcd-key.pem etcd-ca.pem /etc/etcd/ssl/
	
# Start etcd
	systemctl enable etcd --now
	
# Check etcd cluster health
[root@k8s-master-1 ssl]# etcdctl --write-out=table --cacert=/etc/etcd/ssl/etcd-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.0.10:2379  endpoint health
+---------------------------+--------+------------+-------+
|         ENDPOINT          | HEALTH |    TOOK    | ERROR |
+---------------------------+--------+------------+-------+
| https://192.168.0.10:2379 |   true | 6.250211ms |       |
+---------------------------+--------+------------+-------+

Notes:

ETCD_DATA_DIR			# data directory
ETCD_LISTEN_PEER_URLS	# listen address for cluster peer traffic
ETCD_LISTEN_CLIENT_URLS	# listen address for client traffic
ETCD_NAME				# node name, unique within the cluster

ETCD_INITIAL_ADVERTISE_PEER_URLS 	# peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS			# client address advertised to the cluster
ETCD_INITIAL_CLUSTER				# addresses of the cluster members
ETCD_INITIAL_CLUSTER_TOKEN			# cluster token
ETCD_INITIAL_CLUSTER_STATE			# state when joining the cluster: new for a new cluster, existing to join an existing one
--cert-file: path of the client-facing server TLS certificate; etcd presents it to the apiserver, which verifies it
--key-file: path of the matching TLS private key, used to secure subsequent communication
--trusted-ca-file: path of the trusted CA certificate for client connections; certificates the apiserver sends when connecting to etcd are verified against this CA
--peer-key-file: path of the peer TLS private key, used for peer traffic on both the server and client side
--peer-trusted-ca-file: path of the trusted CA file for peer connections
--peer-client-cert-auth: enable peer client certificate verification
--client-cert-auth: enable client certificate verification

Note: strictly speaking etcd should use a server certificate and the apiserver a separate etcd client certificate; to keep maintenance simple, both use the same certificate here
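
A minimal read/write smoke test against the secured endpoint (a sketch reusing the flags from the health check above):

	ETCDCTL="etcdctl --cacert=/etc/etcd/ssl/etcd-ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.0.10:2379"
	$ETCDCTL put /test/key hello   # expected: OK
	$ETCDCTL get /test/key         # expected: /test/key  hello
	$ETCDCTL del /test/key         # expected: 1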

Deploy the apiserver

Upload the k8s components

# Upload the kubernetes-server binary package (master)
	cp kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy /usr/bin
	scp kubectl kube-proxy kubelet root@k8s-node-1:/usr/bin
	for i in root@k8s-node-1 root@k8s-node-2; do scp kubelet kube-proxy $i:/usr/bin; done

# Create the required directories (master,node)
	mkdir -p /etc/kubernetes/ssl
	mkdir -p /var/log/kubernetes

Create the token.csv file

# Format: token,username,UID,group - the kubelet-bootstrap user must be trusted by the apiserver
cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# the system:kubelet-bootstrap group is built in

Note: token.csv is used later when certificates are issued to kubelets automatically

Create the apiserver certificate

# Create the apiserver certificate signing request file
cat > kube-apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.0.10",
    "10.0.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "hunan",
      "L": "changsha",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

# hosts
	hosts must list the IPs/VIPs of the machines running the apiserver plus the first IP of the service CIDR; the rest follows the template above. Node IPs need not be listed, since nodes obtain their certificates via the bootstrap mechanism. In general the hosts field holds all Master/LB/VIP IPs
	一般情况下hosts字段中IP为所有Master/LB/VIP IP
	
# Generate the certificate
	cfssl gencert -ca=kube-apiserver-ca.pem -ca-key=kube-apiserver-ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson  -bare kube-apiserver

Create the front-proxy-client certificate

# Create the API aggregation client certificate signing request file
cat > front-proxy-client-csr.json <<EOF 
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "hunan",
    "L": "changsha",
    "O": "k8s",
    "OU": "system"
  }]
}
EOF

# Generate the certificate
	cfssl gencert -ca=front-proxy-ca.pem -ca-key=front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson  -bare front-proxy-client

Create the service account key pair

# Generate the private key
	openssl genrsa -out ./service.key 2048

# Generate the public key
	openssl rsa -in ./service.key -pubout -out ./service.pub

Note: this key pair is used mainly for signing and verifying service account tokens
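
To confirm the two files form a matching pair, compare their RSA moduli (the two digests must be identical):

	openssl rsa -in service.key -noout -modulus | md5sum
	openssl rsa -pubin -in service.pub -noout -modulus | md5sum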

Create the apiserver configuration file

# Create the apiserver systemd unit file
cat > /usr/lib/systemd/system/kube-apiserver.service <<"EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
 
[Service]
ExecStart=/usr/bin/kube-apiserver \
    --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota \
    --anonymous-auth=false \
    --bind-address=192.168.0.10 \
    --secure-port=6443 \
    --advertise-address=192.168.0.10 \
    --insecure-port=0 \
    --authorization-mode=Node,RBAC \
    --runtime-config=api/all=true \
    --enable-bootstrap-token-auth \
    --token-auth-file=/etc/kubernetes/token.csv \
    --service-cluster-ip-range=10.0.0.0/24 \
    --service-node-port-range=30000-50000 \
    --service-account-key-file=/etc/kubernetes/ssl/service.pub \
    --service-account-signing-key-file=/etc/kubernetes/ssl/service.key \
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \
    --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
    --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
    --client-ca-file=/etc/kubernetes/ssl/kube-apiserver-ca.pem \
    --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
    --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
    --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
    --etcd-certfile=/etc/etcd/ssl/etcd.pem \
    --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
    --etcd-servers=https://192.168.0.10:2379 \
    --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem \
    --requestheader-allowed-names=front-proxy-client   \
    --requestheader-extra-headers-prefix=X-Remote-Extra-  \
    --requestheader-group-headers=X-Remote-Group     \
    --requestheader-username-headers=X-Remote-User   \
    --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.pem  \
    --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client-key.pem    \
    --enable-swagger-ui=true \
    --allow-privileged=true \
    --apiserver-count=1 \
    --audit-log-maxage=30 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-path=/var/log/kube-apiserver-audit.log \
    --event-ttl=1h \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/var/log/kubernetes \
    --v=2
    
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

# Copy the certificates into place
	cp service.pub service.key kube-apiserver.pem kube-apiserver-key.pem kube-apiserver-ca.pem kube-apiserver-ca-key.pem front-proxy-client.pem front-proxy-client-key.pem front-proxy-ca.pem /etc/kubernetes/ssl/
	cp token.csv /etc/kubernetes


# Start
	systemctl daemon-reload 
	systemctl enable kube-apiserver.service --now

# Check that it is running
	systemctl status kube-apiserver

# Access without a certificate
[root@k8s-master-1 ~]# curl -k https://192.168.0.10:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401  #正常
  

# 携带证书访问,这里访问时报证书错误,网上说是csr中names字段组合要不一样,目前未找到具体原因
	curl -v --cert /etc/kubernetes/ssl/kube-apiserver.pem --key /etc/kubernetes/ssl/kube-apiserver-key.pem --cacert /etc/kubernetes/ssl/kube-apiserver-ca.pem https://192.168.0.10:6443/healthz
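
As an alternative sanity check, authentication can use the bootstrap token from token.csv instead of a client certificate (a sketch; the built-in system:public-info-viewer role should let any authenticated user read /healthz):

	TOKEN=$(cut -d, -f1 /etc/kubernetes/token.csv)
	curl -k -H "Authorization: Bearer ${TOKEN}" https://192.168.0.10:6443/healthz
	# expected output: ok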

Notes:

--enable-admission-plugins: admission plugins to enable in addition to the default set

--anonymous-auth: allow anonymous requests to the API server's secure port. Requests not rejected by another authentication method are treated as anonymous, with system username anonymous and system group unauthenticated. Default: true

--bind-address: address the apiserver listens on

--secure-port: HTTPS port

--advertise-address: the apiserver IP address advertised to cluster members; it must be reachable by them

--insecure-port: HTTP port of the apiserver; 0 (as set here) disables HTTP access

--authorization-mode: ordered list of authorization plugins for the secure port. Default: AlwaysAllow. Comma-separated list drawn from: AlwaysAllow,AlwaysDeny,ABAC,Webhook,RBAC,Node

--runtime-config: api/all=true enables all apiserver APIs

--enable-bootstrap-token-auth: allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication

--service-cluster-ip-range: CIDR range from which service cluster IPs are allocated; must not overlap any IP range assigned to nodes or pods. Default: 10.0.0.0/24

--service-node-port-range: port range reserved for NodePort services. Default: 30000-32767

--service-account-key-file: public key file used to verify ServiceAccount tokens; it pairs with the private key given to kube-controller-manager via --service-account-private-key-file

--service-account-signing-key-file: path to the current private key of the service account token issuer, which signs issued ID tokens. Requires the 'TokenRequest' feature gate

--service-account-issuer: identifier of the service account token issuer, asserted in the "iss" claim of issued tokens; a string or a URL

--token-auth-file: if set, this file is used for token authentication on the apiserver's secure port

--tls-cert-file: whether acting as server or client, the apiserver presents this certificate to components other than the kubelet, etcd, and the proxy

--tls-private-key-file: the matching private key

--client-ca-file: CA used to verify certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and so on)

--kubelet-client-certificate: certificate the apiserver presents when connecting to kubelets

--kubelet-client-key: private key for the kubelet client certificate

--etcd-cafile: the CA that issued the etcd certificate; the apiserver verifies certificates sent by etcd against it

--etcd-certfile: certificate the apiserver sends to etcd for verification

--etcd-keyfile: private key the apiserver uses before a session key is negotiated with etcd

--etcd-servers: etcd IP addresses, comma-separated if there are several

--requestheader-client-ca-file: CA used to verify certificates sent by the aggregation proxy

--requestheader-allowed-names: list of client common names allowed through; the names must be signed by the CA in --requestheader-client-ca-file. An empty value allows any client

--requestheader-extra-headers-prefix: request header prefixes to inspect; X-Remote-Extra- is the recommended setting

--requestheader-group-headers: request headers to inspect for group names

--requestheader-username-headers: request headers to inspect for usernames

--proxy-client-cert-file: certificate the apiserver presents when calling the aggregator

--proxy-client-key-file: private key the apiserver uses when calling the aggregator

--enable-swagger-ui: enable the swagger UI on the apiserver

--allow-privileged: if true, allow privileged containers. Default: false

--apiserver-count: number of apiservers running in the cluster; must be positive. Default: 1. Takes effect with --endpoint-reconciler-type=master-count

--audit-log-maxage: maximum number of days to keep audit log files, based on the timestamp encoded in the file name

--audit-log-maxbackup: maximum number of audit log files to keep

--audit-log-maxsize: maximum size in megabytes of an audit log file before it is rotated

--audit-log-path: if set, all requests to the apiserver are logged to this file; '-' means standard output

--event-ttl: retention time for events. Default: 1h0m0s

--alsologtostderr: also write logs to standard error

--logtostderr: write logs to standard error instead of files; default: true

--log-dir: if non-empty, write log files into this directory

--v: log verbosity level

Deploy kubectl

Create the kubectl certificate

# Create the kubectl certificate signing request file
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "hunan",
      "L": "changsha",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}
EOF

# Generate the certificate
	cfssl gencert -ca=kube-apiserver-ca.pem -ca-key=kube-apiserver-ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Copy the certificates into place
	cp admin*.pem /etc/kubernetes/ssl/

Notes:

  1. cluster-admin (a built-in role with the broadest permissions): the ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants every API on kube-apiserver
  2. O sets the certificate's Group to system:masters; it must be system:masters, otherwise the later kubectl create clusterrolebinding commands fail

Create the kubeconfig for kubectl

# Set cluster parameters (note: the cluster CA certificate, not the server certificate, goes here)
	kubectl config set-cluster kubernetes --certificate-authority=kube-apiserver-ca.pem --embed-certs=true --server=https://192.168.0.10:6443 --kubeconfig=kube.config

# Set client authentication parameters
	kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

# Set context parameters
	kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
	
# Set the default context
	kubectl config use-context kubernetes --kubeconfig=kube.config

# Copy to the target directory
    mkdir -p ~/.kube
    cp -i kube.config ~/.kube/config
	
# Check the svc
[root@k8s-master-1 ssl]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   52m


# Authorize the kubernetes user (declared in the CN field of the apiserver certificate) to access the kubelet API; the apiserver needs this later to talk to kubelets
	kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
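
Optionally confirm that the admin kubeconfig really carries cluster-admin level rights:

	kubectl auth can-i '*' '*'
	# expected output: yes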

Deploy kube-controller-manager

Create the kube-controller-manager certificate

# Create the kube-controller-manager certificate signing request file
cat > kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.0.10"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "hunan",
        "L": "changsha",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}
EOF

# Generate the certificate
	cfssl gencert -ca=kube-apiserver-ca.pem -ca-key=kube-apiserver-ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Notes:

  1. CN is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs

Create the kubeconfig for kube-controller-manager

# Set cluster parameters
	kubectl config set-cluster kubernetes --certificate-authority=kube-apiserver-ca.pem --embed-certs=true --server=https://192.168.0.10:6443 --kubeconfig=kube-controller-manager.kubeconfig

# Set client authentication parameters
	kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
	
# Set context parameters
	kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

# Set the default context
	kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Create the kube-controller-manager configuration file

# Create the kube-controller-manager systemd unit file
cat > /usr/lib/systemd/system/kube-controller-manager.service <<"EOF"
[Unit]                                                                     
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]      
ExecStart=/usr/bin/kube-controller-manager \
    --port=10252 \
    --secure-port=10257 \
    --bind-address=127.0.0.1 \
    --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
    --service-cluster-ip-range=10.0.0.0/24 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/etc/kubernetes/ssl/kube-apiserver-ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/ssl/kube-apiserver-ca-key.pem \
    --cluster-signing-duration=87600h \
    --allocate-node-cidrs=true \
    --cluster-cidr=10.70.2.0/24 \
    --root-ca-file=/etc/kubernetes/ssl/kube-apiserver-ca.pem \
    --service-account-private-key-file=/etc/kubernetes/ssl/service.key \
    --use-service-account-credentials=true \
    --leader-elect=true \
    --feature-gates=RotateKubeletServerCertificate=true,RotateKubeletClientCertificate=true \
    --controllers=*,bootstrapsigner,tokencleaner \
    --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
	--requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem \
	--requestheader-allowed-names=front-proxy-client   \
	--requestheader-extra-headers-prefix=X-Remote-Extra-  \
	--requestheader-group-headers=X-Remote-Group     \
	--requestheader-username-headers=X-Remote-User   \
	--horizontal-pod-autoscaler-use-rest-clients=true \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/var/log/kubernetes \
    --v=2      
Restart=on-failure
RestartSec=5   
[Install]      
WantedBy=multi-user.target
EOF

# Copy the files
	cp kube-controller-manager*.pem /etc/kubernetes/ssl/
	cp kube-controller-manager.kubeconfig /etc/kubernetes/

# Start the service
	systemctl daemon-reload
	systemctl enable kube-controller-manager --now

# Check kube-controller-manager status
[root@k8s-master-1 ssl]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                            
etcd-0               Healthy     {"health":"true","reason":""}

Notes:

--port: HTTP port, default 10252; kubectl get cs queries kube-controller-manager health over HTTP by default

--secure-port: HTTPS port

--bind-address: since the component only talks to the apiserver, the listen address can be 127.0.0.1

--kubeconfig: path of the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver; it embeds the CA certificate

--service-cluster-ip-range: CIDR range from which service cluster IPs are allocated; must not overlap any IP range assigned to nodes or pods. Default: 10.0.0.0/24

--cluster-name: prefix for cluster instances. Default: "kubernetes"

--cluster-signing-cert-file: the component that actually issues bootstrap certificates is kube-controller-manager. Because kubelets must be trusted by the apiserver, the same CA as the apiserver's is required, and kube-controller-manager signs kubelet certificates with this CA certificate

--cluster-signing-key-file: the matching private key

--cluster-signing-duration: lifetime of certificates issued through bootstrap; default one year

--allocate-node-cidrs: whether pod CIDRs should be allocated and configured on the cloud provider

--cluster-cidr: CIDR range of pods in the cluster; requires --allocate-node-cidrs to be true

--root-ca-file: if non-empty, this root certificate authority is included in service account token Secrets; the value must be a valid PEM-encoded CA bundle

--service-account-private-key-file: the private key paired with the apiserver's --service-account-key-file public key

--use-service-account-credentials: when true, each controller uses its own service account credentials

--leader-elect: start a leader election client and try to acquire leadership before running the main loop; enable this when running replicated components for higher availability

--feature-gates: key=value pairs toggling alpha/experimental features. With RotateKubeletServerCertificate=true, a kubelet automatically submits a CSR to renew its certificate as expiry approaches; the controller manager must be started with --feature-gates=RotateKubeletServerCertificate=true, and with the matching ClusterRoleBindings in place, kubelet client and server certificates are then signed automatically

--controllers: list of controllers to enable. '*' enables all controllers that are on by default; 'foo' enables the controller named foo; '-foo' disables it. bootstrapsigner and tokencleaner are disabled by default, which is why they are added here

--tls-cert-file / --tls-private-key-file: certificate and key kube-controller-manager serves with; the certificate must be issued by the same CA as the apiserver's

--requestheader*: see the apiserver notes

--alsologtostderr: also write logs to standard output while logging to files

--log-dir: log directory

--v: log verbosity level

Deploy kube-scheduler

Create the kube-scheduler certificate

# Create the certificate signing request file
cat > kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.0.10"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "hunan",
        "L": "changsha",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}
EOF

Note:
	O is system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs
	
# Generate the certificate
	cfssl gencert -ca=kube-apiserver-ca.pem -ca-key=kube-apiserver-ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kubeconfig for kube-scheduler

# Set cluster parameters
	kubectl config set-cluster kubernetes --certificate-authority=kube-apiserver-ca.pem --embed-certs=true --server=https://192.168.0.10:6443 --kubeconfig=kube-scheduler.kubeconfig
	
# Set client authentication parameters
	kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig

# Set context parameters
	kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

# Set the default context
	kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Create the kube-scheduler configuration file

cat > /usr/lib/systemd/system/kube-scheduler.service <<"EOF"
[Unit]                                      
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/bin/kube-scheduler  \
    --address=127.0.0.1  \
    --port=10251  \
    --secure-port=10259  \
    --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig  \
    --leader-elect=true  \
    --alsologtostderr=true  \
    --logtostderr=false  \
    --log-dir=/var/log/kubernetes  \
     --v=2
 
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target
EOF

# Copy the files
	cp kube-scheduler*.pem /etc/kubernetes/ssl/
	cp kube-scheduler.kubeconfig /etc/kubernetes/

# Start the service
	systemctl daemon-reload
	systemctl enable kube-scheduler.service --now

# Check service status
[root@k8s-master-1 ssl]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""} 

Deploy kubelet

Note:

Because the master node in this guide runs system components such as calico and coredns (as pods), the master also needs kubelet and kube-proxy

# Extract the token
BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

Create the kubeconfig for kubelet

# Set cluster parameters
	kubectl config set-cluster kubernetes --certificate-authority=kube-apiserver-ca.pem --embed-certs=true --server=https://192.168.0.10:6443 --kubeconfig=kubelet-bootstrap.kubeconfig

# Set client authentication parameters
	kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig

# Set context parameters
	kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig

# Set the default context
	kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig

Create the kubelet configuration file

Note:

  1. "cgroupDriver": "systemd" must match docker's cgroup driver.
  2. Replace address with the IP of the node that runs the kubelet.
# k8s-master-1 node
cat > k8s-master-1-kubelet.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/kube-apiserver-ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.0.10",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.0.0.10"]
}
EOF

# k8s-node-1 node
cat > k8s-node-1-kubelet.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/kube-apiserver-ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.0.11",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.0.0.10"]
}
EOF

# k8s-node-2 node
cat > k8s-node-2-kubelet.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/kube-apiserver-ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.0.12",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.0.0.10"]
}
EOF

Create the kubelet unit file

# Create the kubelet systemd unit file (master node)
cat > kubelet.service<<"EOF"
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
    --cert-dir=/etc/kubernetes/ssl \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet.json \
    --network-plugin=cni \
    --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6 \
    --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true  \
    --rotate-certificates=true  \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/var/log/kubernetes \
    --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target
EOF

# Move the files into place
	mkdir -p /var/lib/kubelet
	cp kubelet.service /usr/lib/systemd/system/
	cp k8s-master-1-kubelet.json /etc/kubernetes/kubelet.json
	cp kubelet-bootstrap.kubeconfig /etc/kubernetes/

# Start the service
	systemctl daemon-reload
	systemctl enable kubelet --now

Create RBAC rules to auto-approve CSRs

The apiserver automatically creates two ClusterRoles:

  1. system:certificates.k8s.io:certificatesigningrequests:nodeclient
  2. system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
# Add a third one
cat <<EOF | kubectl apply -f -
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
EOF


# Bind the ClusterRoles to the appropriate group so the relevant CSRs are approved automatically; the group used below corresponds to the group in token.csv

# token.csv, format: token,username,UID,group
fbecd7fb7d3c75efc7f8bd8c0896addf,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Allow users in the system:kubelet-bootstrap group to create CSR requests
	kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:kubelet-bootstrap

# Auto-approve the first certificate CSR a kubelet submits during TLS bootstrapping. The --group=system:kubelet-bootstrap in the kubelet-bootstrap and node-client-auto-approve-csr bindings could equally be --user=kubelet-bootstrap, matching token.csv
	kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:kubelet-bootstrap

# Auto-approve CSRs from the system:nodes group renewing the kubelet client certificate used to talk to the apiserver
	kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes

# Auto-approve CSRs from the system:nodes group renewing the serving certificate of the kubelet's port-10250 API
	kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes
	

# Check the CSRs: once the master node joins the cluster, its certificate is signed automatically
[root@k8s-master-1 ~]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-7yhdBfn1JE3dUOPfvRLVkRzlljdgno9X0C_X0gqipzg   2m16s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# Check node status
[root@k8s-master-1 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE    VERSION
k8s-master-1   NotReady   <none>   114s   v1.20.10
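
If a CSR ever stays Pending (for example before the bindings above exist), it can be approved by hand; the CSR name below is a placeholder:

	kubectl get csr
	kubectl certificate approve <csr-name>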

Deploy kubelet on the node machines

# Distribute the certificates and files to the node machines
	for i in root@k8s-node-1 root@k8s-node-2; do scp kube-apiserver-ca.pem $i:/etc/kubernetes/ssl;scp kubelet.service $i:/usr/lib/systemd/system; scp kubelet-bootstrap.kubeconfig $i:/etc/kubernetes; done
	
	scp k8s-node-1-kubelet.json root@k8s-node-1:/etc/kubernetes/kubelet.json
	scp k8s-node-2-kubelet.json root@k8s-node-2:/etc/kubernetes/kubelet.json

# Start the service on the node machines
	for i in root@k8s-node-1 root@k8s-node-2; do ssh $i "mkdir -p /etc/kubernetes/ssl;mkdir -p /var/lib/kubelet; mkdir -p /var/log/kubernetes;systemctl daemon-reload; systemctl enable kubelet --now;"; done
	
# Check the nodes
[root@k8s-master-1 ssl]# kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master-1   NotReady   <none>   46m   v1.20.10
k8s-node-1     NotReady   <none>   13m   v1.20.10
k8s-node-2     NotReady   <none>   13m   v1.20.10

Deploy kube-proxy

Create the kube-proxy certificate

# Create the certificate signing request file
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
      "C": "CN",
      "ST": "hunan",
      "L": "changsha",
      "O": "system:kube-proxy",
      "OU": "system"}]
}
EOF

# Generate the certificate
	cfssl gencert -ca=kube-apiserver-ca.pem -ca-key=kube-apiserver-ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Notes:

  1. CN sets the certificate's User to system:kube-proxy
  2. The predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants the proxy-related kube-apiserver APIs
  3. kube-proxy uses this certificate only as a client certificate, so the hosts field is empty

Create the kubeconfig for kube-proxy

# Set cluster parameters
	kubectl config set-cluster kubernetes --certificate-authority=kube-apiserver-ca.pem --embed-certs=true --server=https://192.168.0.10:6443 --kubeconfig=kube-proxy.kubeconfig

# Set client authentication parameters
	kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

# Set context parameters
	kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

# Set the default context
	kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Create the kube-proxy configuration file

# Replace the IPs with those of the node that runs kube-proxy

# Create the k8s-master-1 configuration file
cat > k8s-master-1-kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.0.10
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.70.2.0/24  # pod CIDR
healthzBindAddress: 192.168.0.10:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.0.10:10249
mode: "ipvs"
EOF

# Create the k8s-node-1 configuration file
cat > k8s-node-1-kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.0.11
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.70.2.0/24  # pod CIDR
healthzBindAddress: 192.168.0.11:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.0.11:10249
mode: "ipvs"
EOF

# Create the k8s-node-2 configuration file
cat > k8s-node-2-kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.0.12
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.70.2.0/24  # pod CIDR
healthzBindAddress: 192.168.0.12:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.0.12:10249
mode: "ipvs"
EOF

Create the kube-proxy unit file

# Create the kube-proxy systemd unit file
cat > kube-proxy.service <<"EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF

# Copy the files and create the required directories
	mkdir -p /var/lib/kube-proxy
	cp kube-proxy.service /usr/lib/systemd/system/
	cp k8s-master-1-kube-proxy.yaml /etc/kubernetes/kube-proxy.yaml
	cp kube-proxy.kubeconfig /etc/kubernetes/

# Start the service
	systemctl daemon-reload
	systemctl enable kube-proxy.service --now

Deploy kube-proxy on the node machines

scp k8s-node-1-kube-proxy.yaml root@k8s-node-1:/etc/kubernetes/kube-proxy.yaml

scp k8s-node-2-kube-proxy.yaml root@k8s-node-2:/etc/kubernetes/kube-proxy.yaml

for i in root@k8s-node-1 root@k8s-node-2; do scp kube-proxy.service $i:/usr/lib/systemd/system/; scp kube-proxy.kubeconfig $i:/etc/kubernetes; ssh $i "mkdir -p /var/lib/kube-proxy; systemctl daemon-reload; systemctl enable kube-proxy.service --now"; done
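
To confirm each kube-proxy really came up in ipvs mode, query the proxyMode endpoint served on the metrics port (swap in each node's IP):

	curl http://192.168.0.10:10249/proxyMode
	# expected output: ipvs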

Assign cluster roles

# Check the cluster state. By default the nodes would be NotReady; they show Ready here because calico had already been deployed on this setup. Note that ROLES is still <none>
[root@k8s-master-1 ssl]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master-1   Ready    <none>   15h   v1.20.10
k8s-node-1     Ready    <none>   15h   v1.20.10
k8s-node-2     Ready    <none>   15h   v1.20.10

# Mark k8s-master-1 as a master node
	kubectl label nodes k8s-master-1 node-role.kubernetes.io/master=

# Mark k8s-node-* as worker nodes
	kubectl label nodes k8s-node-1 node-role.kubernetes.io/node=
	kubectl label nodes k8s-node-2 node-role.kubernetes.io/node=

# Taint the master so it normally accepts no scheduling, only components that tolerate the taint
	kubectl taint nodes k8s-master-1 node-role.kubernetes.io/master=true:NoSchedule
	
# Or remove the taint so the master also accepts scheduling
	kubectl taint nodes k8s-master-1 node-role.kubernetes.io/master-

# Check the current cluster state
[root@k8s-master-1 ssl]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master-1   Ready    master   15h   v1.20.10
k8s-node-1     Ready    node     15h   v1.20.10
k8s-node-2     Ready    node     15h   v1.20.10

Deploy calico

# Download the manifest
	curl -O https://docs.projectcalico.org/manifests/calico.yaml
# Change CALICO_IPV4POOL_CIDR to the pod CIDR
# Places to modify
	- name: CALICO_IPV4POOL_CIDR
	  value: "10.70.2.0/24"      
	- name: IP_AUTODETECTION_METHOD
	  value: interface=ens33
	  
# Deploy calico
	kubectl apply -f calico.yaml
[root@k8s-master-1 ssl]# kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-855445d444-jvxt8   1/1     Running   0          3m22s   10.70.2.65     k8s-node-2     <none>           <none>
kube-system   calico-node-44f94                          1/1     Running   0          3m23s   192.168.0.12   k8s-node-2     <none>           <none>
kube-system   calico-node-bvpdd                          1/1     Running   0          3m23s   192.168.0.11   k8s-node-1     <none>           <none>
kube-system   calico-node-g5p8g                          1/1     Running   0          3m23s   192.168.0.10   k8s-master-1   <none>           <none>

# Master node routes before calico is running
[root@k8s-master-1 ssl]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.2     0.0.0.0         UG    100    0        0 ens33
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33

# Master node ipvs state before calico is running
[root@k8s-master-1 ssl]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr
  -> 192.168.0.10:6443            Masq    1      0          0   

# Master node routes after calico is running
[root@k8s-master-1 ssl]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.2     0.0.0.0         UG    100    0        0 ens33
10.70.2.0       192.168.0.11    255.255.255.192 UG    0      0        0 tunl0
10.70.2.64      192.168.0.12    255.255.255.192 UG    0      0        0 tunl0
10.70.2.128     0.0.0.0         255.255.255.192 U     0      0        0 *
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.0.0     0.0.0.0         255.255.255.0   U     100    0        0 ens33

# Note:
	As the routes above show, for pods to reach each other across hosts calico must run on every node: it installs the corresponding routes on each host, and traffic bound for a pod on another host is routed through the tunnel interface

Deploy coredns

cat > coredns.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

# To run coredns only on the master node, modify the manifest above as follows
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
      nodeName: k8s-master-1

# Check the pods
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE           NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-855445d444-6glmm   1/1     Running   0          19m     10.70.2.1      k8s-node-1     <none>           <none>
kube-system   calico-node-6pkz6                          1/1     Running   0          19m     192.168.0.10   k8s-master-1   <none>           <none>
kube-system   calico-node-8nz7s                          1/1     Running   0          19m     192.168.0.12   k8s-node-2     <none>           <none>
kube-system   calico-node-z7pwc                          1/1     Running   0          19m     192.168.0.11   k8s-node-1     <none>           <none>
kube-system   coredns-6f4c9cb7c5-hj9bj                   1/1     Running   0          5m48s   10.70.2.66     k8s-master-1   <none>           <none>

Deploy metric server

# Create the metrics-server manifest. It is pinned to k8s-master-1 here; adjust to your needs
cat > metrics-server.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      nodeName: k8s-master-1
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --metric-resolution=30s
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.pem
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/ssl
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/ssl
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

# Apply the manifest
	kubectl apply -f metrics-server.yaml

# Get node metrics
[root@k8s-master-1 ~]# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master-1   146m         7%     1352Mi          35%       
k8s-node-1     84m          4%     726Mi           25%       
k8s-node-2     78m          3%     651Mi           22% 

# Check the pods
[root@k8s-master-1 ~]# kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    IP             NODE           NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-855445d444-6glmm   1/1     Running   0          88m    10.70.2.1      k8s-node-1     <none>           <none>
kube-system   calico-node-6pkz6                          1/1     Running   0          88m    192.168.0.10   k8s-master-1   <none>           <none>
kube-system   calico-node-8nz7s                          1/1     Running   0          88m    192.168.0.12   k8s-node-2     <none>           <none>
kube-system   calico-node-z7pwc                          1/1     Running   0          88m    192.168.0.11   k8s-node-1     <none>           <none>
kube-system   coredns-6f4c9cb7c5-hj9bj                   1/1     Running   0          74m    10.70.2.66     k8s-master-1   <none>           <none>
kube-system   metrics-server-68bdbcc6b-gk6cq             1/1     Running   0          6m5s   10.70.2.68     k8s-master-1   <none>           <none>

Test the cluster network

Note: use busybox 1.28; the newest versions have an nslookup bug

Create test pods

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1
  namespace: default
spec:
  nodeSelector:
    node-role.kubernetes.io/master: ""
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "86400"
  restartPolicy: OnFailure
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-2
  namespace: default
spec:
  nodeSelector:
    node-role.kubernetes.io/node: ""
  containers:
  - name: busybox
    image: busybox:1.28
    imagePullPolicy: IfNotPresent
    command:
    - sleep
    - "86400"
  restartPolicy: OnFailure
EOF

# List the current services
[root@k8s-master-1 ssl]# kubectl get svc -A
NAMESPACE     NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP                  45h
kube-system   kube-dns         ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP,9153/TCP   9h
kube-system   metrics-server   ClusterIP   10.0.0.168   <none>        443/TCP                  8h


# Check current pod status
[root@k8s-master-1 ssl]# kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
default       busybox-1                                  1/1     Running   0          18s   10.70.2.73     k8s-master-1   <none>           <none>
default       busybox-2                                  1/1     Running   0          18s   10.70.2.130    k8s-node-2     <none>           <none>
kube-system   calico-kube-controllers-855445d444-6glmm   1/1     Running   1          10h   10.70.2.2      k8s-node-1     <none>           <none>
kube-system   calico-node-6pkz6                          1/1     Running   1          10h   192.168.0.10   k8s-master-1   <none>           <none>
kube-system   calico-node-8nz7s                          1/1     Running   1          10h   192.168.0.12   k8s-node-2     <none>           <none>
kube-system   calico-node-z7pwc                          1/1     Running   1          10h   192.168.0.11   k8s-node-1     <none>           <none>
kube-system   coredns-6f4c9cb7c5-hj9bj                   1/1     Running   1          10h   10.70.2.69     k8s-master-1   <none>           <none>
kube-system   metrics-server-68bdbcc6b-gk6cq             1/1     Running   1          9h    10.70.2.70     k8s-master-1   <none>           <none>

Test pod resolution of services

# Resolve a service in the same namespace
[root@k8s-master-1 ssl]# kubectl exec busybox-1 -- nslookup kubernetes
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

# Resolve a service in another namespace
[root@k8s-master-1 ssl]# kubectl exec busybox-1 -- nslookup kube-dns.kube-system
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Test node access to the kubernetes svc

# Test from every node
[root@k8s-master-1 ~]# telnet 10.0.0.1 443
Trying 10.0.0.1...
Connected to 10.0.0.1.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@k8s-master-1 ~]# telnet 10.0.0.10 53
Trying 10.0.0.10...
Connected to 10.0.0.10.
Escape character is '^]'.

Test pod-to-pod communication

[root@k8s-master-1 ssl]# kubectl exec busybox-1 -- ping 10.70.2.130
PING 10.70.2.130 (10.70.2.130): 56 data bytes
64 bytes from 10.70.2.130: seq=0 ttl=62 time=0.603 ms
64 bytes from 10.70.2.130: seq=1 ttl=62 time=0.451 ms

Notes:

  1. Pods in the same namespace must be able to communicate
  2. Pods in different namespaces must be able to communicate
  3. Pods on different machines must be able to communicate
