Offline Deployment of a K8s Cluster (Binary Method)

I. Prerequisites

Prepare three servers for the cluster (plus one extra host for the private Docker registry), all running CentOS 7.x.

Hardware: 2 CPU cores, 2 GB RAM, 30 GB disk.

Make sure the machines can reach each other over the network and their clocks are synchronized. Download link for the required images and resource package: click to download.
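
Clock synchronization is assumed but not shown in the original steps; a minimal sketch using chrony (usually present on CentOS 7 images; since the hosts are offline, first point /etc/chrony.conf at an internal NTP server of your own):

# keep clocks in sync on every machine
systemctl start chronyd
systemctl enable chronyd
chronyc sources		# verify the configured time sources are reachable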

Cluster plan:

Node name                 IP               Components
k8s-master                192.168.14.101   docker, etcd, kubelet, kube-proxy, kube-apiserver, kube-controller-manager, kube-scheduler
k8s-node1                 192.168.14.102   docker, etcd, kubelet, kube-proxy
k8s-node2                 192.168.14.103   docker, etcd, kubelet, kube-proxy
docker private registry   192.168.14.100   docker (this host can have a lower spec)

Initial setup on the three cluster servers

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Add hosts entries on the master node
cat >> /etc/hosts << EOF
192.168.14.101 k8s-master
192.168.14.102 k8s-node1
192.168.14.103 k8s-node2
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system		# reads the files under /etc/sysctl.d/, including k8s.conf
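
If sysctl reports the two bridge keys as unknown, the br_netfilter kernel module is probably not loaded yet; a hedged fix:

# load the bridge netfilter module now and on every boot (only needed if the keys above are not found)
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system
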
  • Deploy a local private Docker image registry (192.168.14.100)
# Extract docker_绿色免安装版.tar.gz from the resource package and copy all files under docker_inspkg to /usr/sbin/
# Manage docker with systemd
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/sbin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
#################################################################
# Start docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
#################################################################
# Create or edit daemon.json
vim /etc/docker/daemon.json

{
    "registry-mirrors" : ["https://hub-mirror.c.163.com"],
    "insecure-registries" : ["192.168.14.100:5000"]
}
#################################################################
# Restart the docker service
systemctl restart docker
#################################################################
# Load the registry image from the docker-images directory of the resource package
docker load < registry.docker
#################################################################
# Start the registry container
mkdir -p /opt/data/registry
docker run -d --name private_registry -p 5000:5000 -v /opt/data/registry:/var/lib/registry --restart=always registry
  • Push the images from the resource package
# Load all images from the docker-images directory of the resource package (registry.docker has already been loaded)
docker load -i flanneld-v0.11.0-s390x.docker
docker load -i flanneld-v0.11.0-ppc64le.docker
docker load -i flanneld-v0.11.0-arm.docker
docker load -i flanneld-v0.11.0-arm64.docker
docker load -i flanneld-v0.11.0-amd64.docker
docker load -i dashboard-v2.0.4.docker
docker load -i metrics-scraper-v1.0.4.docker
docker load -i coredns-1.6.2.docker
docker load -i busybox.docker

docker images		# list the loaded images
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
registry                       latest              1fd8e1b0bb7e        13 days ago         26.2MB
kubernetesui/dashboard         v2.0.4              46d0a29c3f61        7 months ago        225MB
kubernetesui/metrics-scraper   v1.0.4              86262685d9ab        13 months ago       36.9MB
coredns/coredns                1.6.2               bf261d157914        20 months ago       44.1MB
quay.io/coreos/flannel         v0.11.0-s390x       c5963b81ce28        2 years ago         58.2MB
quay.io/coreos/flannel         v0.11.0-ppc64le     c96a2f3abc08        2 years ago         69.6MB
quay.io/coreos/flannel         v0.11.0-arm64       32ffa9fadfd7        2 years ago         53.5MB
quay.io/coreos/flannel         v0.11.0-arm         ef3b5d63729b        2 years ago         48.9MB
quay.io/coreos/flannel         v0.11.0-amd64       ff281650a721        2 years ago         52.6MB
busybox                        1.28.4              8c811b4aec35        2 years ago         1.15MB

# Re-tag the imported images for the private registry
docker tag busybox:1.28.4 192.168.14.100:5000/busybox
docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.14.100:5000/flannel-amd64
docker tag quay.io/coreos/flannel:v0.11.0-arm 192.168.14.100:5000/flannel-arm
docker tag quay.io/coreos/flannel:v0.11.0-arm64 192.168.14.100:5000/flannel-arm64
docker tag quay.io/coreos/flannel:v0.11.0-ppc64le 192.168.14.100:5000/flannel-ppc64le
docker tag quay.io/coreos/flannel:v0.11.0-s390x 192.168.14.100:5000/flannel-s390x
docker tag coredns/coredns:1.6.2 192.168.14.100:5000/coredns
docker tag kubernetesui/metrics-scraper:v1.0.4 192.168.14.100:5000/metrics-scraper
docker tag kubernetesui/dashboard:v2.0.4 192.168.14.100:5000/dashboard

# Push the re-tagged images to the local registry
docker push 192.168.14.100:5000/busybox
docker push 192.168.14.100:5000/flannel-amd64
docker push 192.168.14.100:5000/flannel-arm
docker push 192.168.14.100:5000/flannel-arm64
docker push 192.168.14.100:5000/flannel-ppc64le
docker push 192.168.14.100:5000/flannel-s390x
docker push 192.168.14.100:5000/coredns
docker push 192.168.14.100:5000/metrics-scraper
docker push 192.168.14.100:5000/dashboard

# Remove the images that are no longer needed
docker rmi busybox:1.28.4
docker rmi quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay.io/coreos/flannel:v0.11.0-arm
docker rmi quay.io/coreos/flannel:v0.11.0-arm64
docker rmi quay.io/coreos/flannel:v0.11.0-ppc64le
docker rmi quay.io/coreos/flannel:v0.11.0-s390x
docker rmi coredns/coredns:1.6.2
docker rmi kubernetesui/metrics-scraper:v1.0.4
docker rmi kubernetesui/dashboard:v2.0.4
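
As an optional check that everything landed in the private registry, query the catalog endpoint (part of the standard Docker Registry v2 HTTP API):

# list the repositories now stored in the private registry
curl http://192.168.14.100:5000/v2/_catalog
# should return a JSON list containing busybox, coredns, dashboard, metrics-scraper and the flannel images
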
  • The local image registry and resources are now ready

II. Install Docker (required on every node)

  • Extract the resource package (docker_绿色免安装版.tar.gz) and copy all files under docker_inspkg to /usr/sbin/
  • Configure the registry mirror and insecure registry
# Create or edit daemon.json
vim /etc/docker/daemon.json

{
    "registry-mirrors" : ["https://hub-mirror.c.163.com"],
    "insecure-registries" : ["192.168.14.100:5000"]
}
  • Manage docker with systemd
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/sbin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
  • Start docker and enable it at boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
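
An optional check that the daemon picked up daemon.json (both fields show up in the docker info output):

docker info | grep -A 2 -E 'Registry Mirrors|Insecure Registries'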

III. Deploy the Etcd Cluster

Note: to save machines, etcd is co-located with the K8s nodes here. It can also be deployed outside the k8s cluster, as long as the apiserver can reach it.

  • Prepare the cfssl certificate tooling; any server will do, the Master node is used here

Go to the directory of the downloaded resource package and copy all files in the cfssl_linux-amd64 directory to /usr/local/bin/:

chmod +x cfssl_linux-amd64/*
cp cfssl_linux-amd64/* /usr/local/bin/
  • Create the Etcd directories and unpack the binary package
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar -zxf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
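
A quick sanity check that the binaries are in place:

/opt/etcd/bin/etcd --version
/opt/etcd/bin/etcdctl version
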
  • Create the configuration file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.14.101:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.14.101:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.14.101:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.14.101:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.14.101:2380,etcd-2=https://192.168.14.102:2380,etcd-3=https://192.168.14.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

################################################################################
#ETCD_NAME: node name, unique within the cluster
#ETCD_DATA_DIR: data directory
#ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
#ETCD_LISTEN_CLIENT_URLS: client listen address
#ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
#ETCD_ADVERTISE_CLIENT_URLS: client advertise address
#ETCD_INITIAL_CLUSTER: addresses of the cluster members
#ETCD_INITIAL_CLUSTER_TOKEN: cluster token
#ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one
################################################################################
  • Generate a self-signed CA certificate
mkdir -p   /usr/local/ssl/{etcd_ssl,k8s_ssl}
cd /usr/local/ssl/etcd_ssl
vim ca-config.json

{
	"signing": {
		"default": {
			"expiry": "87600h"
		},
		"profiles": {
			"www": {
				"expiry": "87600h",
				"usages": [
					"signing",
					"key encipherment",
					"server auth",
					"client auth"
				]
			}
		}
	}
}

#################################################################
vim ca-csr.json

{
	"CN": "etcd CA",
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "Beijing",
			"ST": "Beijing"
		}
	]
}
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem ca.pem
  • Issue the Etcd HTTPS certificate with the self-signed CA
vim server-csr.json

{
	"CN": "etcd",
	"hosts": [
		"192.168.14.101",
		"192.168.14.102",
		"192.168.14.103"
	],
	"key": {
		"algo": "rsa",
		"size": 2048
	},
	"names": [
		{
			"C": "CN",
			"L": "BeiJing",
			"ST": "BeiJing"
		}
	]
}

Note: the IPs in the hosts field above must include the internal communication IPs of all etcd nodes, without exception. To make future scaling easier, a few spare IPs can be added.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem server.pem
  • Copy the certificates generated above to the paths used in the configuration
cp /usr/local/ssl/etcd_ssl/*pem /opt/etcd/ssl/
  • Manage Etcd with systemd
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Start etcd and enable it at boot (the first member will block until the other members come up, so continue with the steps below)
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
  • Copy all the files generated on this node to the other two nodes
scp -r /opt/etcd [email protected]:/opt/
scp -r /opt/etcd [email protected]:/opt/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
  • On each of the other two nodes, change the node name and server IPs in etcd.conf
vim /opt/etcd/cfg/etcd.conf

ETCD_NAME="etcd-1"												# change to a unique name (etcd-2 / etcd-3)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.14.101:2380" 			# change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.14.101:2379" 			# change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.14.101:2380" 	# change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.14.101:2379" 		# change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.14.101:2380,etcd-2=https://192.168.14.102:2380,etcd-3=https://192.168.14.103:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • Start etcd and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
  • Check the cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.14.101:2379,https://192.168.14.102:2379,https://192.168.14.103:2379" endpoint health

If the output contains messages like "is healthy: successfully committed proposal", the Etcd cluster is up. If something is wrong, check the logs first: /var/log/messages.
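
Two more checks that usually narrow a problem down quickly (a sketch; the member list uses the same certificates as the health check above):

journalctl -u etcd --no-pager | tail -n 50		# recent etcd log entries on the failing node
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.14.101:2379" member list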

IV. Deploy the Master Node

1. Deploy kube-apiserver

  • Generate a self-signed CA for kube-apiserver
cd /usr/local/ssl/k8s_ssl
vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
#################################################################

vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

ls *pem
ca-key.pem ca.pem
  • Issue the kube-apiserver HTTPS certificate with the self-signed CA
vim server-csr.json

{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.14.101",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Note: the IPs in the hosts field above must include every Master/LB/VIP IP, without exception. To make future scaling easier, a few spare IPs can be added.

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem server.pem
  • Create the kubernetes working directories and unpack the binary package
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
tar zxvf kubernetes-1.18.3-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy /opt/kubernetes/bin
cp kubectl /usr/bin/
  • Create the configuration file
vim /opt/kubernetes/cfg/kube-apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.14.101:2379,https://192.168.14.102:2379,https://192.168.14.103:2379 \
--bind-address=192.168.14.101 \
--secure-port=6443 \
--advertise-address=192.168.14.101 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

##########################################################################
#--logtostderr: logging switch (false here, so logs are written to files under --log-dir)
#--v: log level
#--log-dir: log directory
#--etcd-servers: etcd cluster addresses
#--bind-address: listen address
#--secure-port: https secure port
#--advertise-address: cluster advertise address
#--allow-privileged: allow privileged containers
#--service-cluster-ip-range: Service virtual IP range
#--enable-admission-plugins: admission control plugins
#--authorization-mode: authorization modes; enables RBAC and Node authorization
#--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
#--token-auth-file: bootstrap token file
#--service-node-port-range: port range allocated to NodePort Services
#--kubelet-client-xxx: client certificate used by the apiserver to access kubelet
#--tls-xxx-file: apiserver https certificates
#--etcd-xxxfile: certificates for connecting to the Etcd cluster
#--audit-log-xxx: audit log settings
##########################################################################
  • Copy the certificates to the paths used in the configuration
cp /usr/local/ssl/k8s_ssl/*pem /opt/kubernetes/ssl/

1.1 Use the TLS bootstrapping mechanism to issue client certificates automatically

TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on every Node must use valid certificates signed by the CA to talk to kube-apiserver. When there are many Nodes, issuing these client certificates by hand is a lot of work and also makes scaling the cluster more complex. To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the kubelet's certificate is signed dynamically by the apiserver. Using this mechanism on Nodes is strongly recommended; it is currently used mainly for the kubelet, while kube-proxy still gets a certificate that we issue centrally.

  • Create the token file
cat > /opt/kubernetes/cfg/token.csv << EOF
ce571d7d7d2f0ff49456fb3469331dd0,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

# Format: token,user name,UID,user group
# The token can also be regenerated with the command below and substituted into this file
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
  • Manage kube-apiserver with systemd
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
  • Start kube-apiserver and enable it at boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

1.2 Authorize the kubelet-bootstrap user to request certificates

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

2. Deploy kube-controller-manager

  • Create the configuration file
vim /opt/kubernetes/cfg/kube-controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

########################################################################################################
#--master: connect to the apiserver through the local insecure port 8080
#--leader-elect: automatic leader election when multiple instances of this component run (HA)
#--cluster-signing-cert-file / --cluster-signing-key-file: CA used to automatically issue kubelet certificates; must match the apiserver's CA
########################################################################################################
  • Manage kube-controller-manager with systemd
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
  • Start kube-controller-manager and enable it at boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

3. Deploy kube-scheduler

  • Create the configuration file
vim /opt/kubernetes/cfg/kube-scheduler.conf

KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1"

######################################################
#--master: connect to the apiserver through the local insecure port 8080
#--leader-elect: automatic leader election when multiple instances of this component run (HA)
######################################################
  • Manage kube-scheduler with systemd
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
  • Start kube-scheduler and enable it at boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
  • Check the status of the control-plane components
kubectl get cs

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

Output like the above means the Master components are running correctly.
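
Two optional follow-up checks (kubectl talks to the apiserver over the local insecure port here):

kubectl cluster-info				# shows the apiserver address kubectl is using
kubectl get componentstatuses		# long form of "kubectl get cs"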

  • Taint the master so that ordinary pods are not scheduled onto it
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule		# add the taint

kubectl describe node k8s-master | grep Taints									# verify
Taints:             node-role.kubernetes.io/master:NoSchedule

# Example toleration that allows a pod to be scheduled onto the master
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule

V. Deploy the Worker Nodes

  • Create the working directories on every worker node
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Copy from the master node
scp [email protected]:/opt/kubernetes/bin/{kubelet,kube-proxy} /opt/kubernetes/bin/
scp [email protected]:/opt/kubernetes/ssl/*pem /opt/kubernetes/ssl/

1. Deploy kubelet

vim /opt/kubernetes/cfg/kubelet.conf

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=192.168.14.100:5000/flannel-amd64"

####################################################################################
#--hostname-override: display name, unique within the cluster; remember to change it on each node
#--network-plugin: enable CNI
#--kubeconfig: empty path, generated automatically; later used to connect to the apiserver
#--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
#--config: configuration parameters file
#--cert-dir: directory where kubelet certificates are generated
#--pod-infra-container-image: image for the container that manages the Pod network; make sure it points at the local image registry
####################################################################################
  • Configuration parameters file
vim /opt/kubernetes/cfg/kubelet-config.yml

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
  • Create the bootstrap.kubeconfig file
vim /opt/kubernetes/cfg/bootstrap.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.14.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: ce571d7d7d2f0ff49456fb3469331dd0

###############################
#The token must match the one in token.csv
#server is the apiserver address (IP:PORT)
###############################
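
If you prefer not to write the file by hand, the same kubeconfig can be produced with kubectl config; a sketch using the values above (run it on the master, where kubectl is installed, then copy the file to the node):

KUBE_APISERVER="https://192.168.14.101:6443"
TOKEN="ce571d7d7d2f0ff49456fb3469331dd0"		# must match token.csv
kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${TOKEN} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
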
  • Manage kubelet with systemd
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Start kubelet and enable it at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

2. Deploy kube-proxy

vim /opt/kubernetes/cfg/kube-proxy.conf

KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
  • Configuration parameters file (make sure hostnameOverride matches the node's hostname)
vim /opt/kubernetes/cfg/kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.0.0.0/24
  • Generate the kube-proxy certificate (on the master node)
cd /usr/local/ssl/k8s_ssl/
vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

scp kube-proxy*pem [email protected]:/opt/kubernetes/ssl/
scp kube-proxy*pem [email protected]:/opt/kubernetes/ssl/
  • Create the kube-proxy.kubeconfig file
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.14.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem
  • Manage kube-proxy with systemd
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • Start kube-proxy and enable it at boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
  • Approve the kubelet certificate request and join the node to the cluster (on the master)
# List the kubelet certificate requests
kubectl get csr

NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-R_LO5xq1403TuvHbUD-feenbi5H2YLX5Y2mqeqU1WTA   26s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# Approve the request
kubectl certificate approve node-csr-R_LO5xq1403TuvHbUD-feenbi5H2YLX5Y2mqeqU1WTA

# List the nodes
kubectl get node

NAME         STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>    1m   v1.18.3
# The node stays NotReady because the network plugin has not been deployed yet

3. Deploy the CNI Network

  • Unpack the binary package into the default working directory
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
  • Apply the resource file (kube-flannel.yaml from the yaml directory of the resource package)
kubectl apply -f kube-flannel.yaml			# run on the master; make sure the image address points at the local registry first

# Check the pod
kubectl get pods -n kube-system

NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-xn4fr   1/1     Running   0          4m46s

# Check the Node again; it is now Ready
kubectl get node

NAME         STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   4m2s   v1.18.3

# Show which nodes the pods are running on, their status, etc.
kubectl get pod -n kube-system -o wide
  • Authorize the apiserver to access kubelet (apiserver-to-kubelet-rbac.yaml from the yaml directory of the resource package)
kubectl apply -f apiserver-to-kubelet-rbac.yaml			# run on the master
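
To confirm the binding took effect, fetching logs through the apiserver should now work (using the flannel pod listed earlier as an example):

kubectl logs -n kube-system kube-flannel-ds-amd64-xn4fr --tail=5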

4. Add a New Worker Node

  • Copy the Node-related files from an already deployed node to the new node
scp -r [email protected]:/opt/kubernetes /opt/
scp -r [email protected]:/opt/cni /opt/
scp [email protected]:/usr/lib/systemd/system/{kubelet,kube-proxy}.service /usr/lib/systemd/system/
  • Delete the kubelet certificate and kubeconfig files
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

# Note: these files are generated automatically after the certificate request is approved and are unique to each Node, so they must be deleted and regenerated.
  • Change the hostname in the configuration files
vim /opt/kubernetes/cfg/kubelet.conf

--hostname-override=k8s-node2

vim /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: k8s-node2
  • Start the services and enable them at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
  • Approve the kubelet certificate request and join the node to the cluster (on the master)
# List the kubelet certificate requests
kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4C7LfCkvTYVxwHskjHJPyqOZ802lN8VXbWR16-Ev9rQ   11s     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-R_LO5xq1403TuvHbUD-feenbi5H2YLX5Y2mqeqU1WTA   7m26s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

# Approve the request
kubectl certificate approve node-csr-4C7LfCkvTYVxwHskjHJPyqOZ802lN8VXbWR16-Ev9rQ

# List the nodes
kubectl get node			
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    <none>   5m49s   v1.18.3
k8s-node2   Ready    <none>   43s     v1.18.3

VI. Deploy Dashboard and CoreDNS (on the master)

1. Deploy Dashboard

  • Copy all kubelet and kube-proxy related files from a worker node to the master
scp [email protected]:/opt/kubernetes/cfg/* /opt/kubernetes/cfg/
scp [email protected]:/opt/kubernetes/ssl/kube-proxy*pem /opt/kubernetes/ssl/
scp [email protected]:/usr/lib/systemd/system/{kubelet,kube-proxy}.service /usr/lib/systemd/system/
scp -r [email protected]:/opt/cni /opt/
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
rm -f /opt/kubernetes/ssl/kubelet*

# Change the hostname in the configuration files
vim /opt/kubernetes/cfg/kubelet.conf

--hostname-override=k8s-master

vim /opt/kubernetes/cfg/kube-proxy-config.yml

hostnameOverride: k8s-master

# Start the services and enable them at boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

# List the kubelet certificate requests
kubectl get csr

# Approve the request
kubectl certificate approve node-csr-PIZOcdpw4lBn2SPfwSrYTXCPh-9ZnBawmkbtvy6AZwc

# List the nodes
kubectl get node
k8s-master   Ready    <none>   2m6s   v1.18.3
k8s-node1    Ready    <none>   84m    v1.18.3
k8s-node2    Ready    <none>   79m    v1.18.3
  • Pin the Dashboard pods to the master node (skip this step if you do not need it)
# Label the master node
kubectl label node k8s-master kubernetes.dashboard.request=enabled

# Show the labels on the nodes
kubectl get nodes --show-labels

# Edit recommended.yaml and set nodeSelector to the label just added
vim recommended.yaml

nodeSelector:
  "kubernetes.dashboard.request": enabled
  • By default the Dashboard is only reachable from inside the cluster; change the Service to NodePort to expose it externally
# Use recommended.yaml from the yaml directory of the resource package; make sure the image addresses point at the local registry
vim recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  • Apply the resource file to create the pods
kubectl apply -f recommended.yaml      # make sure the image addresses point at the local registry first

kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-694557449d-przxl   1/1     Running   0          5s
pod/kubernetes-dashboard-5f98bdb684-v4krk        1/1     Running   0          6s

NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.239   <none>        8000/TCP        5s
service/kubernetes-dashboard        NodePort    10.0.0.81    <none>        443:30001/TCP   6s
#################################################################
kubectl get pods -n kubernetes-dashboard -o wide		# check that the dashboard pods landed on the master node
NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-849c6b8f5c-qvvcs   1/1     Running   0          5m24s   10.244.1.5   k8s-master   <none>           <none>
kubernetes-dashboard-7fb54f5f74-hbxvj        1/1     Running   0          5m25s   10.244.1.6   k8s-master   <none>           <none>
  • Access URL: https://NodeIP:30001
  • Create a service account and bind it to the default cluster-admin role
kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# Log in to the Dashboard with the token printed below
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

2. Deploy CoreDNS

CoreDNS provides name resolution for Services inside the cluster.

  • Edit coredns.yaml from the yaml directory of the resource package

a. Change the cluster domain

data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes $DNS_DOMAIN in-addr.arpa ip6.arpa {   // replace $DNS_DOMAIN with the cluster domain (see clusterDomain in kubelet-config.yml)
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

b. Adjust the CoreDNS container resource limits

      containers:
      - name: coredns
        image: 192.168.14.100:5000/coredns   // change the image address to the local registry
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: $DNS_MEMORY_LIMIT        // replace with the maximum memory for the container (e.g. 170Mi)
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]

c. Set the cluster DNS IP

spec:
  selector:
    k8s-app: coredns		 // change every k8s-app: kube-dns in the file to k8s-app: coredns
  clusterIP: $DNS_SERVER_IP  // replace with the cluster DNS IP (see clusterDNS in kubelet-config.yml)
  ports:
  - name: dns
    port: 53
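
The two placeholders can be read straight from the kubelet configuration created earlier (here they are cluster.local and 10.0.0.2):

grep -A 1 -E 'clusterDomain|clusterDNS' /opt/kubernetes/cfg/kubelet-config.yml
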
  • Apply the coredns.yaml resource file
kubectl apply -f coredns.yaml		# make sure the image address points at the local registry first

# Check the pod status
kubectl get pods,svc -l k8s-app=coredns -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-856d9669c5-zmws2   1/1     Running   0          24s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/coredns   ClusterIP   10.0.0.2    <none>        53/UDP,53/TCP,9153/TCP   24s
  • DNS resolution test
kubectl run -it --rm dns-test --image=192.168.14.100:5000/busybox sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 coredns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

# Test passed
