Installing a Multi-Master Kubernetes Cluster from Binaries

| Cluster role | IP | Hostname | Installed components |
| --- | --- | --- | --- |
| Control node | 192.168.1.180 | master1 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
| Control node | 192.168.1.181 | master2 | apiserver, controller-manager, scheduler, etcd, docker, keepalived, nginx |
| Control node | 192.168.1.182 | master3 | apiserver, controller-manager, scheduler, etcd, docker |
| Worker node | 192.168.1.183 | node1 | kubelet, kube-proxy, docker, calico, coredns |
| VIP | 192.168.1.199 | | |

When to use kubeadm vs. a binary installation

kubeadm is the official open-source tool for quickly standing up a Kubernetes cluster, and it is currently the most convenient and recommended approach. The kubeadm init and kubeadm join commands can create a cluster in minutes. When kubeadm initializes a cluster, all control-plane components run as pods, so they recover from failures automatically.

kubeadm is essentially an automated installer: it is like a script that installs the cluster for you. That automation hides many details, so you get little visibility into the individual components, and if you do not understand the Kubernetes architecture well, problems can be hard to troubleshoot. kubeadm suits teams that deploy Kubernetes frequently or want a high degree of automation.

A binary installation means downloading each component's binary package from the official release page and installing it by hand, which gives you a much more complete understanding of Kubernetes.

Both kubeadm and binary installations are suitable for production and run stably there; choose based on an evaluation of your actual project.

1. Initialization

1.1 Configure hostnames

#Master1
hostnamectl set-hostname master1 && bash
#Master2
hostnamectl set-hostname master2 && bash
#Master3
hostnamectl set-hostname master3 && bash
#node1
hostnamectl set-hostname node1 && bash

1.2 Configure the hosts file

#master1
cat > /etc/hosts <<END
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.180 master1
192.168.1.181 master2
192.168.1.182 master3
192.168.1.183 node1
END
scp /etc/hosts 192.168.1.181:/etc/hosts
scp /etc/hosts 192.168.1.182:/etc/hosts
scp /etc/hosts 192.168.1.183:/etc/hosts

1.3 Configure passwordless SSH login

#master1
ssh-keygen
ssh-copy-id master1
ssh-copy-id master2
ssh-copy-id master3
ssh-copy-id node1
#Repeat the same steps on all other nodes

1.4 Disable firewalld and SELinux

#master1
systemctl disable firewalld --now
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 
setenforce 0
#Repeat the same steps on all other nodes

1.5 Disable swap

#master1
swapoff -a  #disable swap temporarily
#To make it permanent, comment out the swap line in /etc/fstab with a leading #

# /etc/fstab
# Created by anaconda on Wed May 18 10:27:51 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=2489b3c2-2bbd-47e4-bf21-02e1e06a21de /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

#Repeat the same steps on all other nodes

1.6 Adjust kernel parameters

#master1
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<END
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
END
sysctl -p /etc/sysctl.d/k8s.conf
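A quick way to confirm the settings took effect is to read them back from /proc. This small helper is not part of the original steps, just a verification sketch: the two net/bridge keys only exist while the br_netfilter module is loaded, so it prints a hint instead of failing.

```shell
# Read the three k8s.conf settings straight from /proc/sys.
# The net/bridge keys exist only while br_netfilter is loaded.
for key in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
  if [ -f "/proc/sys/$key" ]; then
    echo "$key = $(cat "/proc/sys/$key")"
  else
    echo "$key missing (run: modprobe br_netfilter)"
  fi
done
```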

cat > /etc/rc.sysinit <<END
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x \$file ] && \$file
done
END
cd /etc/sysconfig/modules
cat >br_netfilter.modules <<END
modprobe br_netfilter
END
chmod 755 /etc/sysconfig/modules/br_netfilter.modules

#Repeat the same steps on all other nodes

1.7 Configure package repositories

#Install the base packages (all nodes)
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet rsync

#Add the Aliyun docker-ce repository (all nodes)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

1.8 Configure time synchronization

#All nodes
yum install -y ntpdate
ntpdate ntp1.aliyun.com
crontab -e
0 */1 * * * /usr/sbin/ntpdate ntp1.aliyun.com
systemctl restart crond

1.9 Install iptables

#All nodes
yum install -y iptables-services
systemctl disable iptables --now
iptables -F

1.10 Install docker-ce

#All nodes
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker --now
systemctl status docker

1.11 Configure Docker registry mirrors

#All nodes
cat > /etc/docker/daemon.json <<END
{
  "registry-mirrors": ["https://5vrctq3v.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],"exec-opts": ["native.cgroupdriver=systemd"]
}
END
systemctl daemon-reload
systemctl restart docker
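dockerd refuses to start when daemon.json contains a JSON syntax error (a stray trailing comma is a common one), so it is worth validating the file before restarting. A minimal sketch, shown against a temp copy and assuming python3 is available; on a real node you would point it at /etc/docker/daemon.json instead:

```shell
# Write a demo daemon.json and validate it with Python's JSON parser.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```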

2. Set up the etcd cluster

2.1 Create the etcd working directories

#Create directories for the config files and certificates
#master1
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl
#master2
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl
#master3
mkdir -p /etc/etcd
mkdir -p /etc/etcd/ssl

2.2 Install the cfssl certificate tools

#master1
mkdir /data/work -p
cd /data/work
#Upload cfssl-certinfo_linux-amd64, cfssljson_linux-amd64 and cfssl_linux-amd64
#to /data/work/
chmod a+x cfssl*
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo


2.3 Configure the CA certificate

#master1
#Generate the CA certificate signing request
cd /data/work
cat > ca-csr.json <<END
{
  "CN": "kubernetes",
  "key": {
      "algo": "rsa",
      "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
          "expiry": "87600h"
  }
}
END
cfssl gencert -initca ca-csr.json | cfssljson -bare ca


#Create the CA signing policy file (ca-config.json)
cat > ca-config.json <<END
{
  "signing": {
      "default": {
          "expiry": "87600h"
        },
      "profiles": {
          "kubernetes": {
              "usages": [
                  "signing",
                  "key encipherment",
                  "server auth",
                  "client auth"
              ],
              "expiry": "87600h"
          }
      }
  }
}
END


2.4 Generate the etcd certificates

#On master1, create the etcd CSR; change the hosts IPs to your own etcd node IPs
cd /data/work
cat > etcd-csr.json <<END
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.1.180",
    "192.168.1.181",
    "192.168.1.182",
    "192.168.1.199"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "CN",
    "ST": "Hubei",
    "L": "Wuhan",
    "O": "k8s",
    "OU": "system"
  }]
} 
END
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem --config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd


2.5 Deploy the etcd cluster

Upload etcd-v3.4.13-linux-amd64.tar.gz to /data/work/ on master1

#master1
cd /data/work
tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz
cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
#master1, master2 and master3 form an HA cluster, so copy the binaries to master2 and master3 as well
scp -r etcd-v3.4.13-linux-amd64/etcd* master2:/usr/local/bin/
scp -r etcd-v3.4.13-linux-amd64/etcd* master3:/usr/local/bin/

#Generate the etcd.conf file on master1
cat > /etc/etcd/etcd.conf <<END
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.180:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.180:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.180:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.180:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.180:2380,etcd2=https://192.168.1.181:2380,etcd3=https://192.168.1.182:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

END

#Generate the etcd.conf file on master2
cat > /etc/etcd/etcd.conf <<END
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.181:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.181:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.181:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.181:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.180:2380,etcd2=https://192.168.1.181:2380,etcd3=https://192.168.1.182:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

END


#Generate the etcd.conf file on master3
cat > /etc/etcd/etcd.conf <<END
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.182:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.182:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.182:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.182:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.180:2380,etcd2=https://192.168.1.181:2380,etcd3=https://192.168.1.182:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

END

cat > etcd.service <<END
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

END

cd /data/work/
cp ca*.pem /etc/etcd/ssl/
cp etcd*.pem /etc/etcd/ssl/

cp etcd.service /usr/lib/systemd/system/

yum install -y rsync  #run this on master1, master2 and master3



cd /data/work
for i in {master2,master3}
do
rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;
done

cd /data/work
for i in {master2,master3}
do
rsync -vaz etcd.service $i:/usr/lib/systemd/system/;
done

mkdir -p /var/lib/etcd/default.etcd   #run on master1, master2 and master3; this is the data directory required by etcd.conf

2.6 Start the etcd service

#Run the following commands on master1, master2 and master3
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

2.7 Verify the etcd cluster

#Check from master1 (ETCDCTL_API=3 must be on the same line as the command, or exported, to take effect)
ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.1.180:2379,https://192.168.1.181:2379,https://192.168.1.182:2379 endpoint health

If every endpoint in the output table reports HEALTH=true, the etcd cluster is configured correctly.

3. Install the Kubernetes components

3.1 Download the packages

The binary packages are published on GitHub:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/

After downloading, upload kubernetes-server-linux-amd64.tar.gz to /data/work/ on master1

cd /data/work
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin
rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master3:/usr/local/bin
scp kubelet kube-proxy node1:/usr/local/bin/
cd /data/work

#Run the next three commands on all three master nodes
mkdir -p /etc/kubernetes/
mkdir -p /etc/kubernetes/ssl
mkdir -p /var/log/kubernetes

3.2 Deploy the apiserver component

#Enable the TLS Bootstrap mechanism

When the apiserver has TLS authentication enabled, the kubelet on every node must use a valid certificate signed by the apiserver's CA to communicate with the apiserver. With many nodes, issuing these client certificates by hand takes a lot of work and makes scaling the cluster more complicated.

To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the apiserver signs the kubelet's certificate dynamically.

(Copyright notice: all content of this document belongs to Han Xianchao; it may be used for personal study only and must not be privately redistributed; violators will be held legally responsible.)

Bootstrap programs exist in many systems (Linux has one, for example); a bootstrap is generally preconfigured and loaded at boot or startup to bring up a given environment. Likewise, the Kubernetes kubelet can load a config file of this kind at startup, with contents similar to:
apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}
#How TLS bootstrapping works
1. What TLS provides
TLS encrypts the connection to prevent man-in-the-middle eavesdropping. In addition, a client whose certificate is not trusted cannot even establish a connection to the apiserver, let alone have permission to request anything from it.
2. What RBAC provides
With the transport secured by TLS, authorization is handled by RBAC (other models, such as ABAC, are possible). RBAC defines which APIs a user or group (the subject) may call. Combined with TLS, the apiserver reads the client certificate's CN field as the username and its O field as the group.
In summary: first, to talk to the apiserver a client must present a certificate signed by the apiserver's CA, establishing the trust needed for a TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs.
#The kubelet's first start
TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect. But how does the kubelet connect the very first time, when it has no certificate yet?
The apiserver config references a token.csv file that defines a preset user. That user's token, together with the apiserver's CA certificate, is written into the bootstrap.kubeconfig file used by the kubelet. On its first request, the kubelet uses bootstrap.kubeconfig to establish a TLS connection with the apiserver (trusting the apiserver's CA) and presents the token from bootstrap.kubeconfig to declare its RBAC identity.
token.csv format: 3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
On first start, the kubelet may report a 401 (unauthorized) error against the apiserver. By default the kubelet declares its identity through the preset token in bootstrap.kubeconfig and then creates a CSR; but until we grant them, that preset user has no permissions at all, not even to create a CSR. So we create a ClusterRoleBinding that binds the preset kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSR requests. This is demonstrated later, when the kubelet is installed.
#master1
cd /data/work
cat > token.csv <<END
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
END
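The pipeline above produces 16 random bytes from /dev/urandom rendered as 32 hex characters, which is the token format token.csv expects. A quick sketch to verify:

```shell
# Generate a token the same way token.csv does and check its format:
# it should be exactly 32 lowercase hex characters.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token: $TOKEN"
echo -n "$TOKEN" | wc -c    # expect 32
```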

cat > kube-apiserver-csr.json <<END
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.1.180",
    "192.168.1.181",
    "192.168.1.182",
    "192.168.1.183",
    "192.168.1.199",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
END

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver


#Create the apiserver configuration file
#Flag reference:
--logtostderr: log to standard error
--v: log verbosity
--log-dir: log directory
--etcd-servers: etcd cluster endpoints
--bind-address: listen address
--secure-port: https port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to reach kubelets
--tls-xxx-file: apiserver https certificate
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings


#Master1
cat > /etc/kubernetes/kube-apiserver.conf <<END
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.1.180 \
  --secure-port=6443 \
  --advertise-address=192.168.1.180 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.1.180:2379,https://192.168.1.181:2379,https://192.168.1.182:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

END


#Master2
cat > /etc/kubernetes/kube-apiserver.conf <<END
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.1.181 \
  --secure-port=6443 \
  --advertise-address=192.168.1.181 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.1.180:2379,https://192.168.1.181:2379,https://192.168.1.182:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

END

#Master3
cat > /etc/kubernetes/kube-apiserver.conf <<END
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --bind-address=192.168.1.182 \
  --secure-port=6443 \
  --advertise-address=192.168.1.182 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.255.0.0/16 \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://192.168.1.180:2379,https://192.168.1.181:2379,https://192.168.1.182:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=4"

END


#Create the service unit file

cat > kube-apiserver.service <<END
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
END


cp ca*.pem /etc/kubernetes/ssl
cp kube-apiserver*.pem /etc/kubernetes/ssl/
cp token.csv /etc/kubernetes/
cp kube-apiserver.service /usr/lib/systemd/system/
rsync -vaz token.csv master2:/etc/kubernetes/
rsync -vaz token.csv master3:/etc/kubernetes/

rsync -vaz kube-apiserver*.pem master2:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver*.pem master3:/etc/kubernetes/ssl/
rsync -vaz ca*.pem master2:/etc/kubernetes/ssl/
rsync -vaz ca*.pem master3:/etc/kubernetes/ssl/
rsync -vaz kube-apiserver.service master2:/usr/lib/systemd/system/
rsync -vaz kube-apiserver.service master3:/usr/lib/systemd/system/

#Run the next three lines on all three master nodes
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

3.3 Deploy the kubectl component

kubectl is the client tool for operating on Kubernetes resources: create, delete, update, query, and so on.

How does kubectl know which cluster to connect to? It reads a kubeconfig file such as /etc/kubernetes/admin.conf, which records the cluster to access and the certificates to use. One option is to point the KUBECONFIG environment variable at it:

export KUBECONFIG=/etc/kubernetes/admin.conf

With that set, kubectl automatically loads KUBECONFIG to decide which cluster's resources to manage.

Another option, the one kubeadm suggests after initializing a cluster, is:

cp /etc/kubernetes/admin.conf /root/.kube/config

kubectl then loads /root/.kube/config. If KUBECONFIG is set, it takes precedence; otherwise kubectl falls back to /root/.kube/config to decide which cluster to manage.
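That lookup order can be sketched in a few lines of shell; this mirrors kubectl's default behavior (KUBECONFIG wins when set, otherwise ~/.kube/config):

```shell
# Which kubeconfig will kubectl load? KUBECONFIG takes precedence;
# otherwise the default ~/.kube/config is used.
if [ -n "${KUBECONFIG:-}" ]; then
  echo "kubectl will use: $KUBECONFIG"
else
  echo "kubectl will use: $HOME/.kube/config"
fi
```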

#Create the CSR request file
#master1
cd /data/work

cat > admin-csr.json <<END
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:masters",             
      "OU": "system"
    }
  ]
}
END

#Note: kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, pods, ...).
It predefines some RoleBindings for RBAC: for example, cluster-admin binds the group system:masters to the
Role cluster-admin, which grants permission to call every kube-apiserver API. The O field sets this
certificate's group to system:masters; when a client presents it to kube-apiserver, authentication succeeds
because the certificate is CA-signed, and because its group is the pre-authorized system:masters, the client
is granted access to all APIs.
Note: this admin certificate is used later to generate the administrator's kubeconfig. RBAC is the
generally recommended way to control roles and permissions in Kubernetes; it takes the certificate's
CN field as the User and the O field as the Group.
"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding
commands will fail.
#With O set to system:masters, the in-cluster cluster-admin ClusterRoleBinding binds the system:masters
group to the cluster-admin ClusterRole.
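You can see the CN-to-User and O-to-Group mapping directly in a certificate's subject. An illustrative sketch (not part of the original steps): it creates a throwaway self-signed cert with the same subject fields as admin-csr.json and reads them back with openssl; on a real node you would inspect the signed admin.pem instead.

```shell
# Create a demo cert with the same subject layout as admin-csr.json,
# then print the subject: CN becomes the User, O becomes the Group.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem \
  -subj "/C=CN/ST=Hubei/L=Wuhan/O=system:masters/OU=system/CN=admin" 2>/dev/null
openssl x509 -in /tmp/demo.pem -noout -subject
```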


#Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson --bare admin

cp admin*.pem /etc/kubernetes/ssl



#Create the kubeconfig file (important)
kubeconfig is kubectl's configuration file. It contains everything needed to access the apiserver:
the apiserver address, the CA certificate, and the client's own certificate.
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.180:6443 --kubeconfig=kube.config


cat kube.config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVYmlTd2k2S051UzYrOHR0VnIwczlWbmxVa1l3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qSXdOVEl4TURFeU9EQXdXaGNOTXpJd05URTRNREV5T0RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxONDIzSllIVGE5b05jVHk1Nk9WRGlWK29qWXZRUE8KMUtBUXlhSXBuZ0ZZRlVUcnlLNlZVTDlMUXExWjVnS0cvbVViekt1K2crQXd1cmNMRm1Fb1lOL1VNblkvYmQwRgovK1BsdzdkSE5JV1FhakxBYzVBU2J2Tkh2alF0SXlVa29kZ0VoOHhiN1VWcmtZLzhCWnl6RGV4Ujc2RzlMOWpDClV1ZHR2dU5VU0ZCcHNjclJ3Ti9jSUdoNUNuc1ljYWRlQUZ3MzVNZWRHMVFSMncwbUJHdGlUVDRxQysySVFCMlIKcXUreWhyNGIwa0JpaWxXZGIxNTRFd2JnUjJIMXdTM2ovZlFKdTF2Vzc5VENNUEVSVUU0M2VuYnI2MThEcEhRcQpzVS84SmgzOHhxbm9lSFRRbFMwdDlOUFNRVjdwQzYrd0Z1MUVnY0t0OEQ3Zjlmamc4b29iSDlFQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRkVBbFBhS0VrU3ZRS2czY1VQbElxbWdQMHloME1COEdBMVVkSXdRWU1CYUFGRUFsUGFLRWtTdlFLZzNjVVBsSQpxbWdQMHloME1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQWhnODNhMXJpeWRvRGlQMG92RGlycTh1QmN0L3lkCjh2NDhDcjE3cHpRRG9zaGRhalFRTk15UVZtSlovNFFWSlZZQzl2dFNkMW83WlFaVHVza1dML0NBaUUzSXhVT0IKQkRsdlplT2xRdWhldDMzei8yZXNFcXpOU3FMMTFyQzBuM0FTQk1Jd0Z4Yk5YT29xbFl4bUwvRGpUQm1IV2h6cwo4SXBaRzZaY2wvSkVqSGwxTmNDOXU1M294QlFURHlXQ1pJWWV1ZUF6bHRoQ2FhZzhpamZzVjF2U3BTQXdkQ2Z5CnJTdUlkWHJ3UHlNdituN2YzZXVHZkdxTmY0aG4vWDFuZWZOTFFpb095Q2RSaXZCYVlBMXN6c0NHUFZEZDB2WkwKWnFMTS9EbHU3cks1M004R09ndjZoTmR1cjMzajltaGRHdDNpTENsMkpPY1ZQNm5obktEcGdXbkcKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.1.180:6443
  name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

#Set client credentials
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

#Set the context
kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

#Switch to the context
kubectl config use-context kubernetes --kubeconfig=kube.config

mkdir ~/.kube -p
cp kube.config ~/.kube/config

#Grant the kubernetes user (the apiserver client certificate) permission to access the kubelet API
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

#Check the cluster component status
kubectl cluster-info
kubectl get componentstatuses
kubectl get all --all-namespaces

#Sync the kubectl config to the other nodes: create ~/.kube on master2 and master3, then run the rsync commands from master1
mkdir ~/.kube/
rsync -vaz config master2:/root/.kube/
rsync -vaz config master3:/root/.kube/


#Configure kubectl command completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > ~/.kube/completion.bash.inc
source '/root/.kube/completion.bash.inc'
source $HOME/.bash_profile

#Official kubectl cheat sheet:
https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/

3.4 Deploy the kube-controller-manager component

#Create the CSR request file on master1
cd /data/work
cat > kube-controller-manager-csr.json <<END
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.1.180",
      "192.168.1.181",
      "192.168.1.182",
      "192.168.1.199"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-controller-manager",
        "OU": "system"
      }
    ]
}

END

#Generate the certificate
#master1
cd /data/work
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

#Create the kubeconfig for kube-controller-manager
1. Set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.180:6443 --kubeconfig=kube-controller-manager.kubeconfig

2. Set client credentials
kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig

3. Set context parameters
 kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
 
4. Switch to the context
kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

#Create the configuration file
cd /data/work/
cat > kube-controller-manager.conf <<END
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
  --secure-port=10252 \
  --bind-address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.255.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.0.0.0/16 \
  --experimental-cluster-signing-duration=87600h \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2"

END

#Create the service unit file
cd /data/work
cat > kube-controller-manager.service <<END
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

END
#Copy the files
cd /data/work
cp kube-controller-manager*.pem /etc/kubernetes/ssl/
cp kube-controller-manager.kubeconfig /etc/kubernetes/
cp kube-controller-manager.conf /etc/kubernetes/
cp kube-controller-manager.service /usr/lib/systemd/system/

rsync -vaz kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
rsync -vaz kube-controller-manager*.pem master3:/etc/kubernetes/ssl/

rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master2:/etc/kubernetes/
rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master3:/etc/kubernetes/

rsync -vaz kube-controller-manager.service master2:/usr/lib/systemd/system/
rsync -vaz kube-controller-manager.service master3:/usr/lib/systemd/system/
#Start the service (run the following four lines on master2 and master3 as well)
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

3.5 Deploy the kube-scheduler component

#Create the CSR request
#master1
cd /data/work/
cat > kube-scheduler-csr.json <<END
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.1.180",
      "192.168.1.181",
      "192.168.1.182",
      "192.168.1.199"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Hubei",
        "L": "Wuhan",
        "O": "system:kube-scheduler",
        "OU": "system"
      }
    ]
}

END
#Note: the hosts list contains all kube-scheduler node IPs. CN is system:kube-scheduler and O is
system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

#Generate the certificate
cd /data/work/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

#Create the kubeconfig for kube-scheduler
1. Set cluster parameters
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.180:6443 --kubeconfig=kube-scheduler.kubeconfig
2. Set client credentials
kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
3. Set context parameters
kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
4. Switch to the context
kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
5. Create the configuration file
cd /data/work
cat > kube-scheduler.conf <<END
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

END
6. Create the service unit file
cd /data/work/
cat > kube-scheduler.service <<END
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

END

7. Copy the files
cd /data/work/
cp kube-scheduler*.pem /etc/kubernetes/ssl/
cp kube-scheduler.kubeconfig /etc/kubernetes/
cp kube-scheduler.conf /etc/kubernetes/
cp kube-scheduler.service /usr/lib/systemd/system/

rsync -vaz kube-scheduler*.pem master2:/etc/kubernetes/ssl/
rsync -vaz kube-scheduler*.pem master3:/etc/kubernetes/ssl/

rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master3:/etc/kubernetes/

rsync -vaz kube-scheduler.service master2:/usr/lib/systemd/system/
rsync -vaz kube-scheduler.service master3:/usr/lib/systemd/system/

8. Start the service (run the following three commands on master2 and master3 as well)
systemctl daemon-reload
systemctl enable kube-scheduler --now
systemctl status kube-scheduler

3.6 部署kubelet组件

导入离线镜像压缩包

#把 pause-cordns.tar.gz 上传到 node1 节点,手动解压

docker load -i pause-cordns.tar.gz

kubelet: the kubelet on each Node periodically calls the API Server's REST interface to report its own status; the API Server receives this information and updates the node's status in etcd. The kubelet also watches Pod information through the API Server, and uses it to manage the Pods on its node: creating, deleting, and updating them.

Run the following on master1
Create kubelet-bootstrap.kubeconfig
cd /data/work/
BOOTSTRAP_TOKEN=$(awk -F ',' '{print $1}' /etc/kubernetes/token.csv)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.180:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
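
The token extraction above can be sanity-checked in isolation. A minimal sketch, assuming token.csv's first comma-separated field is the bootstrap token (the token value and file path below are made up for illustration):

```bash
# Sample token.csv in the format "token,user,uid,group"; the token is fake.
cat > /tmp/token.csv <<'EOF'
51dad8a0a3c4a832a29a4f1b1067f573,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# awk -F ',' '{print $1}' extracts the first comma-separated field: the token.
BOOTSTRAP_TOKEN=$(awk -F ',' '{print $1}' /tmp/token.csv)
echo "$BOOTSTRAP_TOKEN"
```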

#Create the configuration file
"cgroupDriver": "systemd" must match Docker's cgroup driver.

cd /data/work/
cat > kubelet.json <<END
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.1.183",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

END


cat > kubelet.service <<END
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/ssl \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet.json \
  --network-plugin=cni \
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
 
[Install]
WantedBy=multi-user.target

END

#Note: --hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; generated automatically, later used to connect to the apiserver
--bootstrap-kubeconfig: used at first startup to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the infrastructure container that manages the Pod network
#Note: change the address field in kubelet.json to each worker node's own IP, then start the service on each worker node
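
The per-node address change called out in the note can be scripted. A sketch, assuming workers follow node1's naming; node2 and its IP are hypothetical, and the sample file is only a fragment of the real kubelet.json:

```bash
# Generate a per-node kubelet.json with each worker's own IP stamped in.
cd /tmp
printf '{\n  "address": "192.168.1.183",\n  "port": 10250\n}\n' > kubelet.json  # sample fragment
for entry in node1:192.168.1.183 node2:192.168.1.184; do
  node=${entry%%:*}; ip=${entry##*:}
  # Replace whatever IP is in the address field with this node's IP.
  sed "s/\"address\": \"[0-9.]*\"/\"address\": \"$ip\"/" kubelet.json > "kubelet-$node.json"
done
grep '"address"' kubelet-node2.json
```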



mkdir /etc/kubernetes/ssl -p  #run this line on node1

#run the next four lines on master1
cd /data/work/
scp kubelet-bootstrap.kubeconfig kubelet.json node1:/etc/kubernetes/
scp ca.pem node1:/etc/kubernetes/ssl/
scp kubelet.service node1:/usr/lib/systemd/system/

#Run the following on node1
#Start the kubelet service
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
systemctl daemon-reload
systemctl enable kubelet --now
systemctl status kubelet

#After the kubelet service starts on node1, it sends a CSR to the master. Run kubectl get csr to see the pending request, copy the value in its NAME column, and approve it with kubectl certificate approve followed by that name.
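
The copy-and-approve step can also be scripted by filtering the Pending rows out of `kubectl get csr`. A sketch using awk on sample output (the here-doc stands in for the real command's output, and the CSR name is illustrative):

```bash
# Print the NAME column of every row whose CONDITION is Pending (skip header).
awk 'NR>1 && $NF=="Pending" {print $1}' <<'EOF'
NAME        AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-x  10s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
EOF
# each printed name can then be fed to: kubectl certificate approve <name>
```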


Now node1 shows up as a cluster member. Note: a STATUS of NotReady means the network plugin is not yet installed.


3.7 Deploy the kube-proxy component

#Create the CSR request
cd /data/work/
cat > kube-proxy-csr.json <<END
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

END
#Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

#Create the kubeconfig file
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.1.180:6443 --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

#Create the kube-proxy configuration file
cd /data/work/
cat > kube-proxy.yaml <<END
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.1.183
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.1.0/24
healthzBindAddress: 192.168.1.183:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.1.183:10249
mode: "ipvs"

END

#Create the systemd service file
cd /data/work/
cat > kube-proxy.service <<END
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

END

#Copy the files
scp kube-proxy.kubeconfig kube-proxy.yaml node1:/etc/kubernetes/
scp kube-proxy.service node1:/usr/lib/systemd/system/

#Start the service (run on node1)
mkdir -p /var/lib/kube-proxy
systemctl daemon-reload
systemctl enable kube-proxy --now
systemctl status kube-proxy

3.8 Deploy the calico component

Upload calico.tar.gz to the node1 node, then load it manually


#Run on master1
cd /data/work/
#Upload calico.yaml to /data/work/ on master1

kubectl apply -f calico.yaml

3.9 Deploy the coredns component

#master1
cd /data/work/
cat > coredns.yaml <<END
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
END


#master1  
#Deploy the coredns component
cd /data/work/
kubectl apply -f coredns.yaml

#Verify that coredns and the cluster network are working
Upload busybox-1-28.tar.gz to node1's home directory
[root@node1 ~]# docker load -i busybox-1-28.tar.gz 
[root@node1 ~]# docker images
REPOSITORY                  TAG       IMAGE ID       CREATED         SIZE
calico/pod2daemon-flexvol   v3.18.0   2a22066e9588   14 months ago   21.7MB
calico/node                 v3.18.0   5a7c4970fbc2   15 months ago   172MB
calico/cni                  v3.18.0   727de170e4ce   15 months ago   131MB
calico/kube-controllers     v3.18.0   9a154323fbf7   15 months ago   53.4MB
coredns/coredns             1.7.0     bfe3a36ebd25   23 months ago   45.2MB
k8s.gcr.io/coredns          1.7.0     bfe3a36ebd25   23 months ago   45.2MB
k8s.gcr.io/pause            3.2       80d28bedfe5d   2 years ago     683kB
busybox                     1.28      8c811b4aec35   4 years ago     1.15MB



[root@master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping www.baidu.com
PING www.baidu.com (103.235.46.39): 56 data bytes
64 bytes from 103.235.46.39: seq=0 ttl=44 time=14.695 ms
64 bytes from 103.235.46.39: seq=1 ttl=44 time=14.617 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 14.617/14.656/14.695 ms
/ # 


[root@master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local

#If busybox can ping www.baidu.com, the network is fine and the calico component is working;
#if nslookup kubernetes.default.svc.cluster.local succeeds, coredns is working.
#Note:
busybox must be the pinned 1.28 version, not the latest; with the latest image, nslookup fails to resolve the DNS name and IP, with errors like:
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.255.0.2
Address: 10.255.0.2:53
*** Can't find kubernetes.default.svc.cluster.local: No answer
*** Can't find kubernetes.default.svc.cluster.local: No answer
10.255.0.2 is the clusterIP of our coreDNS service, which confirms coreDNS is configured correctly.
Internal Service names are resolved through coreDNS.






4. Install keepalived+nginx for k8s apiserver high availability

4.1 Install the nginx primary and backup

[root@master1 ~]# yum install -y nginx keepalived
[root@master2 ~]# yum install -y nginx keepalived

4.2 Modify the nginx configuration file (same on primary and backup)

#Do this on both Master1 and Master2
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
cat > /etc/nginx/nginx.conf <<END
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
stream {
 log_format main '\$remote_addr \$upstream_addr - [\$time_local] \$status \$upstream_bytes_sent';
 access_log /var/log/nginx/k8s-access.log main;
 upstream k8s-apiserver {
   server 192.168.1.180:6443;    #Master1 APISERVER IP:PORT
   server 192.168.1.181:6443;    #Master2 APISERVER IP:PORT
   server 192.168.1.182:6443;    #Master3 APISERVER IP:PORT
 }
 server {
  listen 16443;  #nginx runs on the master nodes, so this listen port cannot be 6443 or it would conflict with the apiserver
  proxy_pass k8s-apiserver;
 }
}
http {
    log_format  main  '\$remote_addr - \$remote_user [\$time_local] "\$request" '
                      '\$status \$body_bytes_sent "\$http_referer" '
                      '"\$http_user_agent" "\$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
   #include /etc/nginx/conf.d/*.conf;
    server {
        listen       80 default_server;
        server_name  _;
        location = / {
        }
    }
}

END
systemctl enable nginx --now

4.3 keepalived configuration

#Configure Master1 as the primary
#vrrp_script: script that checks nginx health (failover is decided from nginx status)
#virtual_ipaddress: virtual IP (VIP)
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf <<END
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192    #change to the actual NIC name
    virtual_router_id 51  #VRRP router ID; unique per instance
    priority 100   #priority; set to 90 on the backup server
    advert_int 1    #VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    #virtual IP
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
       check_nginx
    }
}

END

#Health-check script
#Note: keepalived normally decides failover from the script's exit code (0 means healthy, non-zero means failed); the scripts below instead stop keepalived themselves when nginx is down, which likewise lets the VIP fail over.
Use either one of the two scripts below; the first may produce false positives, so the second is preferred.
cat > /etc/keepalived/check_nginx.sh <<END
#!/bin/bash 
count=\$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|\$\$") 
if [ "\$count" -eq 0 ];then 
 systemctl stop keepalived 
fi
END

cat > /etc/keepalived/check_nginx.sh <<END
#!/bin/bash
#1. check whether nginx is alive
counter=\`ps -C nginx --no-header | wc -l\`
if [ \$counter -eq 0 ];then
      #2. if not, try to start nginx
      systemctl start nginx
      sleep 2
      #3. after 2 seconds, check nginx status again
      counter=\`ps -C nginx --no-header | wc -l\`
      #4. if nginx is still not alive, stop keepalived so the VIP can float away
      if [ \$counter -eq 0 ]; then
          systemctl stop keepalived
      fi
fi

END
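
The restart-then-check flow of the second script can be exercised without a real nginx by stubbing the process count and the systemctl calls. A sketch (all function names and the /tmp/mock_count file are made up for the demo):

```bash
nginx_count() { cat /tmp/mock_count; }        # stands in for: ps -C nginx --no-header | wc -l
start_nginx() { echo 1 > /tmp/mock_count; }   # stands in for: systemctl start nginx
stop_keepalived() { echo "keepalived stopped, VIP floats away"; }

echo 0 > /tmp/mock_count                      # simulate nginx being down
if [ "$(nginx_count)" -eq 0 ]; then
  start_nginx                                 # first remedy: restart nginx
  if [ "$(nginx_count)" -eq 0 ]; then
    stop_keepalived                           # still down: give up the VIP
  fi
fi
```

Here the stubbed restart succeeds, so keepalived is left running; only a failed restart would trigger the failover branch.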

chmod a+x /etc/keepalived/check_nginx.sh

systemctl enable keepalived --now
#Configure Master2 as the backup
#vrrp_script: script that checks nginx health (failover is decided from nginx status)
#virtual_ipaddress: virtual IP (VIP)
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
cat > /etc/keepalived/keepalived.conf <<END
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192    #change to the actual NIC name
    virtual_router_id 51  #VRRP router ID; unique per instance
    priority 90   #priority; set to 100 on the primary server
    advert_int 1    #VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    #virtual IP
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
       check_nginx
    }
}
END

#Health-check script
#Note: keepalived normally decides failover from the script's exit code (0 means healthy, non-zero means failed); the scripts below instead stop keepalived themselves when nginx is down, which likewise lets the VIP fail over.
Use either one of the two scripts below; the first may produce false positives, so the second is preferred.
cat > /etc/keepalived/check_nginx.sh <<END
#!/bin/bash 
count=\$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|\$\$") 
if [ "\$count" -eq 0 ];then 
 systemctl stop keepalived 
fi
END

cat > /etc/keepalived/check_nginx.sh <<END
#!/bin/bash
#1. check whether nginx is alive
counter=\`ps -C nginx --no-header | wc -l\`
if [ \$counter -eq 0 ];then
      #2. if not, try to start nginx
      systemctl start nginx
      sleep 2
      #3. after 2 seconds, check nginx status again
      counter=\`ps -C nginx --no-header | wc -l\`
      #4. if nginx is still not alive, stop keepalived so the VIP can float away
      if [ \$counter -eq 0 ]; then
          systemctl stop keepalived
      fi
fi

END


chmod a+x /etc/keepalived/check_nginx.sh

systemctl enable keepalived --now

4.4 Modify the node configuration

All worker-node components currently still connect to master1; unless they are switched to the VIP behind the load balancer, the master is still a single point of failure.

The next step is therefore to change the configuration files on every worker node (the nodes listed by kubectl get node) from 192.168.1.180:6443 to 192.168.1.199:16443 (the VIP).

Run on all worker nodes:

sed -i 's/192.168.1.180:6443/192.168.1.199:16443/' /etc/kubernetes/kubelet-bootstrap.kubeconfig
sed -i 's/192.168.1.180:6443/192.168.1.199:16443/' /etc/kubernetes/kubelet.json

sed -i 's/192.168.1.180:6443/192.168.1.199:16443/' /etc/kubernetes/kubelet.kubeconfig
sed -i 's/192.168.1.180:6443/192.168.1.199:16443/' /etc/kubernetes/kube-proxy.yaml
sed -i 's/192.168.1.180:6443/192.168.1.199:16443/' /etc/kubernetes/kube-proxy.kubeconfig


systemctl restart kubelet kube-proxy
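
The sed substitution can be verified on a throwaway copy before touching the real files under /etc/kubernetes (the sample file below is a one-line stand-in for a kubeconfig):

```bash
cd /tmp
printf '    server: https://192.168.1.180:6443\n' > sample.kubeconfig
# Same substitution as above: old apiserver address -> VIP:16443.
sed -i 's/192.168.1.180:6443/192.168.1.199:16443/' sample.kubeconfig
grep server sample.kubeconfig
```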
