Building a K8S Cluster from Binaries - 1. [Service Component Deployment]

Cluster plan:
2 masters (HA) | 2 nodes (the nodes run the workloads, so give them higher specs; the masters can be sized a bit lower than the nodes)
The lab environment cannot handle 4 VMs at once, so start with 1 master and 2 nodes, then add a second master later to scale out to the HA architecture.


K8S architecture: (architecture diagram)

Part 1. System Initialization

Run on the master:

1. Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.1.222 k8s-master1
192.168.1.111 k8s-node1
192.168.1.112 k8s-node2
EOF
2. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

3. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

4. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

5. Set the hostname according to the plan
hostnamectl set-hostname <hostname>   # e.g. k8s-master1 on the master

6. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

7. Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

On the remaining node machines, run all of the same steps except step 1. (With many nodes, Ansible is recommended.)

The Ansible approach:

ansible hosts file

[node]
192.168.1.111  name=k8s-node1
192.168.1.112  name=k8s-node2

playbook.yml

---
- hosts: node
  gather_facts: no
  tasks:
    - name: Disable the firewall
      systemd: name=firewalld state=stopped enabled=no
    - name: Disable SELinux
      shell: sed -i 's/enforcing/disabled/' /etc/selinux/config
    - name: Disable swap
      shell: sed -ri 's/.*swap.*/#&/' /etc/fstab
    - name: Install ntpdate for time sync
      yum: name=ntpdate state=installed
    - name: Run time sync
      shell: ntpdate time.windows.com
    - name: Copy the sysctl tweak script
      copy: src=edit_sysctl.sh dest=/root/
    - name: Run the script
      shell: sh /root/edit_sysctl.sh
    - name: Set the hostname
      tags: hostname
      shell: hostnamectl set-hostname {{ name }}
      # {{ name }} is the per-host variable defined for each host in the hosts file above

The edit_sysctl.sh script

#!/bin/bash
# pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Part 2. Setting Up the Etcd Cluster

Etcd is a distributed key-value store. Kubernetes uses etcd for its data storage, so prepare an etcd database first. To avoid a single point of failure, deploy etcd as a cluster; with 3 members the cluster tolerates the failure of 1 machine.
Note: to save machines, etcd is co-located on the K8s node machines here. It can also be deployed outside the K8s cluster, as long as the apiserver can reach it.
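The "3 members tolerate 1 failure" arithmetic generalizes: an N-member etcd cluster needs a quorum of N/2+1 members, so it tolerates (N-1)/2 failures. A quick illustrative sketch:

```shell
# Fault tolerance of an etcd cluster: quorum is N/2+1, so an N-member
# cluster keeps working as long as at most (N-1)/2 members are down.
for n in 1 3 5 7; do
  echo "$n members: quorum $(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

This is also why even member counts buy nothing: 4 members still tolerate only 1 failure, same as 3.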


1. Prepare the cfssl certificate generation tool

This is done on k8s-master:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
2. Generate etcd certificates

2-1. Self-signed certificate authority (CA)
Create the working directory:

mkdir -p ~/TLS/{etcd,k8s}

cd ~/TLS/etcd

Self-sign a CA:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
         }
       ]
}
EOF

Generate the certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#ls *pem
#ca-key.pem  ca.pem

2-2. Use the self-signed CA to issue the etcd HTTPS certificate
Create the certificate signing request file:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.222",
    "192.168.1.111",
    "192.168.1.112",
    "192.168.1.113"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

Note: the hosts field lists the trusted etcd IPs; every member of the etcd cluster must be included. To make later scaling easier, add a few reserved IPs as well.
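After signing (next step), the SAN list baked into the certificate can be double-checked with openssl. Since this sketch assumes no files from the guide, it demonstrates the check on a throwaway self-signed certificate carrying the same hypothetical IP list; on the master you would point the second command at the server.pem generated below (the -addext flag needs OpenSSL 1.1.1+):

```shell
# Create a throwaway cert with the SAN IPs, then print its
# Subject Alternative Names -- the same check applies to the
# server.pem signed by cfssl.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo.pem -days 1 -subj "/CN=etcd" \
  -addext "subjectAltName=IP:192.168.1.222,IP:192.168.1.111,IP:192.168.1.112,IP:192.168.1.113"
openssl x509 -in /tmp/demo.pem -noout -text | grep -A1 "Subject Alternative Name"
```

If an IP is missing from this output, etcd clients connecting to that address will fail TLS verification.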

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

#ls server*pem
#server-key.pem  server.pem
3. Download the etcd binary from GitHub
https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
4. Deploy the etcd cluster

Deploy one node first, then copy the whole etcd program directory and the systemd unit file to the other nodes to save time. Start on the master.

4-1. Create the working directory and unpack the binary package (etcd/bin)

mkdir -p /root/k8s/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /root/k8s/etcd/bin/   # move the binaries into place first

4-2. Create the etcd config file (etcd/cfg)

cat > /root/k8s/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.222:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.222:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.222:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.222:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.222:2380,etcd-2=https://192.168.1.111:2380,etcd-3=https://192.168.1.112:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

Notes:

  • ETCD_NAME: member name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
  • ETCD_LISTEN_CLIENT_URLS: client listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: peer advertise address
  • ETCD_ADVERTISE_CLIENT_URLS: client advertise address
  • ETCD_INITIAL_CLUSTER: addresses of all cluster members
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: join state; new for a new cluster, existing to join an already-running one

4-3. Copy the etcd certificates generated earlier into place (etcd/ssl)

cp /root/TLS/etcd/ca*pem   /root/TLS/etcd/server*pem   /root/k8s/etcd/ssl/

4-4. Configure etcd as a systemd service (save the unit below as /usr/lib/systemd/system/etcd.service)

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/root/k8s/etcd/cfg/etcd.conf
ExecStart=/root/k8s/etcd/bin/etcd \
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/root/k8s/etcd/ssl/server.pem \
        --key-file=/root/k8s/etcd/ssl/server-key.pem \
        --peer-cert-file=/root/k8s/etcd/ssl/server.pem \
        --peer-key-file=/root/k8s/etcd/ssl/server-key.pem \
        --trusted-ca-file=/root/k8s/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/root/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4-5. Start the service and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Note: starting etcd on the first node by itself will appear to hang. The other members have not started yet, so it is waiting for their connections and sits in a timeout state.
If startup fails, check the logs: /var/log/messages or journalctl -u etcd

4-6. Copy the etcd program directory and systemd unit file to the other etcd nodes, then change etcd.conf on each node to its own IP and ETCD_NAME.
With many machines, use Ansible:

---
- hosts: node
  gather_facts: no
  tasks:
    - name: Create the working directory on the etcd nodes
      file: name=/root/k8s state=directory

    - name: Copy the etcd program directory
      copy: src=/root/k8s/etcd dest=/root/k8s/

    - name: Copy the systemd unit file
      tags: sys
      copy: src=/usr/lib/systemd/system/etcd.service dest=/usr/lib/systemd/system/

Modify etcd.conf (every node must change the IPs and ETCD_NAME to its own)
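For example, adapting the copied config for etcd-2 (192.168.1.111) changes only ETCD_NAME and the four member-local URL lines, while ETCD_INITIAL_CLUSTER keeps all three peers. A sed sketch of that edit, demonstrated on a local copy (on the real node the target is /root/k8s/etcd/cfg/etcd.conf):

```shell
# Local copy of the relevant etcd.conf lines from the master.
cat > etcd.conf << 'EOF'
ETCD_NAME="etcd-1"
ETCD_LISTEN_PEER_URLS="https://192.168.1.222:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.222:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.222:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.222:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.1.222:2380,etcd-2=https://192.168.1.111:2380,etcd-3=https://192.168.1.112:2380"
EOF
# Rewrite the member name and only the member-local listen/advertise lines;
# ETCD_INITIAL_CLUSTER is deliberately left untouched.
sed -E -i \
  -e 's/^ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/' \
  -e '/^ETCD_(LISTEN|INITIAL_ADVERTISE|ADVERTISE)_/ s/192\.168\.1\.222/192.168.1.111/' \
  etcd.conf
grep -E '^ETCD_(NAME|LISTEN_PEER_URLS|INITIAL_CLUSTER)' etcd.conf
```

On the third node, substitute etcd-3 and 192.168.1.112 accordingly.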



Finally, start the service just as above.

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

4-7. Check the etcd cluster status (wrapped in a script for convenience)

#!/bin/bash
WORKDIR=/root/k8s
server_1=192.168.1.222
server_2=192.168.1.111
server_3=192.168.1.112
ETCDCTL_API=3 $WORKDIR/etcd/bin/etcdctl --cacert=$WORKDIR/etcd/ssl/ca.pem --cert=$WORKDIR/etcd/ssl/server.pem --key=$WORKDIR/etcd/ssl/server-key.pem --endpoints="https://${server_1}:2379,https://${server_2}:2379,https://${server_3}:2379" endpoint health

If every endpoint reports healthy, the etcd cluster is up and the setup is complete.



Part 3. Installing Docker

Install Docker on every node; a plain yum install is fine here.

yum install -y yum-utils
# Alibaba Cloud mirror repo
yum-config-manager \
  --add-repo \
   http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# refresh the yum package index
yum makecache fast
# install docker
yum install -y docker-ce
# configure the Alibaba registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
# start docker and enable it at boot
systemctl start docker
systemctl enable docker

Part 4. Deploying the Master Node

1. Generate the kube-apiserver certificate

1-1. Self-signed certificate authority (CA)

Switch to the TLS directory created earlier: cd /root/TLS/k8s

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
Generate the certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#ls *pem
#ca-key.pem  ca.pem
1-2. Use the self-signed CA to issue the kube-apiserver HTTPS certificate

Create the certificate signing request file.
Note: the hosts field again lists the trusted hosts. It must include every master and LB/VIP IP without exception; to simplify later scaling, add a few reserved IPs, and plan all IPs and roles before deploying. (222 and 252 are the master IPs; 188 and 199 are reserved VIP IPs.)

cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.1.222",
      "192.168.1.252",
      "192.168.1.188",
      "192.168.1.199",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#ls server*pem
#server-key.pem  server.pem
2. Download and unpack the binary package

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183
Note: the link lists many packages; downloading the server package alone is enough, as it contains the binaries for both the master and the worker nodes.

mkdir -p /root/k8s/kubernetes/{bin,cfg,ssl,logs} 
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /root/k8s/kubernetes/bin/
cp kubectl /usr/bin/

3. Deploy kube-apiserver

3-1. Create the config file
cat > /root/k8s/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/root/k8s/kubernetes/logs \\
--etcd-servers=https://192.168.1.222:2379,https://192.168.1.111:2379,https://192.168.1.112:2379 \\
--bind-address=192.168.1.222 \\
--secure-port=6443 \\
--advertise-address=192.168.1.222 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/root/k8s/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/root/k8s/kubernetes/ssl/server.pem \\
--kubelet-client-key=/root/k8s/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/root/k8s/kubernetes/ssl/server.pem \\
--tls-private-key-file=/root/k8s/kubernetes/ssl/server-key.pem \\
--client-ca-file=/root/k8s/kubernetes/ssl/ca.pem \\
--service-account-key-file=/root/k8s/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/root/k8s/etcd/ssl/ca.pem \\
--etcd-certfile=/root/k8s/etcd/ssl/server.pem \\
--etcd-keyfile=/root/k8s/etcd/ssl/server-key.pem \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/root/k8s/kubernetes/logs/k8s-audit.log"
EOF

Note: of the two backslashes above, the first is an escape character and the second a line-continuation character; the escape is needed so the heredoc (EOF) preserves the continuation backslash in the written file.
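A quick demonstration of that note: in an unquoted heredoc a lone trailing backslash is consumed as a line continuation, while an escaped backslash survives as the literal \ the config file needs:

```shell
# "joined \" + newline is folded into one line; "kept \\" emits a literal
# backslash followed by a real newline -- the form the config file needs.
cat << EOF
joined \
line
kept \\
line
EOF
```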

#Config option reference
--logtostderr: logging switch
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization; enables RBAC and Node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range for NodePort Services
--kubelet-client-xxx: client certificates the apiserver uses to access the kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit logging
3-2. Copy the certificates generated earlier under TLS/k8s to the config path
cp /root/TLS/k8s/server*pem  /root/TLS/k8s/ca*pem  /root/k8s/kubernetes/ssl/
3-3. Enable the TLS bootstrapping mechanism

Note on TLS bootstrapping: once the master apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present valid CA-signed certificates to talk to it. With many nodes, issuing those client certificates by hand is a lot of work and complicates cluster scaling. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs it dynamically. This approach is strongly recommended on nodes; it is currently used mainly for the kubelet, while kube-proxy still uses a certificate we issue ourselves.

Create the token file referenced in the config above:
# generate a token
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# create the token file (format: token,user,uid,"group")
cat > /root/k8s/kubernetes/cfg/token.csv << EOF
54c2956999675c60a4d8fd821d68edee,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
# authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
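A quick sanity check on the token format: 16 random bytes rendered as hex give exactly 32 lowercase hex characters, which is what the first field of token.csv expects:

```shell
# Generate a bootstrap token and verify it is exactly 32 hex characters.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "$TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token format OK"
```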
3-4. Add the apiserver to systemd and start the service
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/root/k8s/kubernetes/cfg/kube-apiserver.conf
ExecStart=/root/k8s/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver

4. Deploy kube-controller-manager

4-1. Create the config file
cat > /root/k8s/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/root/k8s/kubernetes/logs \\
--leader-elect=true \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/root/k8s/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/root/k8s/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/root/k8s/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/root/k8s/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
EOF

--master: connect to the apiserver via the local insecure port 8080.
--leader-elect: leader election when multiple instances run (HA)
--cluster-signing-cert-file / --cluster-signing-key-file: the CA that auto-issues kubelet certificates; must match the apiserver's

4-2. Add kube-controller-manager to systemd and start the service
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/root/k8s/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/root/k8s/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager

5. Deploy kube-scheduler

5-1. Create the config file
cat > /root/k8s/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/root/k8s/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF

--master: connect to the apiserver via the local insecure port 8080.
--leader-elect: leader election when multiple instances run (HA)

5-2. Add kube-scheduler to systemd and start the service
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/root/k8s/kubernetes/cfg/kube-scheduler.conf
ExecStart=/root/k8s/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler

6. Check the cluster status

At this point, all the master-node components are deployed; check the cluster status:

kubectl get cs

Part 5. Deploying the Worker Node

This is still done on the master node: the 222 machine serves as both master and node. Deploying the other worker nodes follows the same idea as etcd earlier: get one node fully working, then copy the files to the others and adjust their configs.

1. Create the working directory and copy the binaries

Run on all worker nodes (the master has already done this; run on 111 and 112):

mkdir -p /root/k8s/kubernetes/{bin,cfg,ssl,logs} 

Copy from the master node to the worker nodes:

cd /root/k8s/kubernetes/server/bin
cp kubelet kube-proxy  /root/k8s/kubernetes/bin/   # local copy on the master

2. Deploy the kubelet

2-1. Create the config file
cat > /root/k8s/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/root/k8s/kubernetes/logs \\
--hostname-override=k8s-master \\
--network-plugin=cni \\
--kubeconfig=/root/k8s/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/root/k8s/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/root/k8s/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/root/k8s/kubernetes/ssl \\
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
EOF

--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path, generated automatically; used later to talk to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network

2-2. Configuration parameter file
cat > /root/k8s/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /root/k8s/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
2-3. Generate the bootstrap.kubeconfig file

The following shell commands are run directly:

KUBE_APISERVER="https://192.168.1.222:6443" # apiserver IP:PORT
TOKEN="54c2956999675c60a4d8fd821d68edee" # must match token.csv

# generate the kubelet bootstrap kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user="kubelet-bootstrap" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Running this produces bootstrap.kubeconfig in the current directory; copy it to /root/k8s/kubernetes/cfg/.


2-4. Add the kubelet to systemd and start it
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/root/k8s/kubernetes/cfg/kubelet.conf
ExecStart=/root/k8s/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Note:
A problem came up here: at first the kubelet would not start, and the logs showed a cgroup driver mismatch error.

docker info showed that Docker's cgroup driver was systemd (the default).
Fix: either 1) edit Docker's daemon.json to change its cgroup driver to cgroupfs, or 2) edit kubelet-config.yml and change cgroupDriver to systemd. Either way, the kubelet and the Docker engine must use the same driver.
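A sketch of option 1, keeping the registry mirror from the Docker section while pinning the cgroup driver to cgroupfs (written to ./daemon.json here; on the real node place it at /etc/docker/daemon.json and restart docker):

```shell
# Docker daemon config aligning its cgroup driver with the
# cgroupDriver: cgroupfs setting in kubelet-config.yml.
cat > daemon.json << 'EOF'
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
```

After installing it, `systemctl restart docker` and confirm with `docker info | grep -i cgroup`.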

2-5. Approve the kubelet certificate request and join the cluster

Check pending kubelet certificate requests:

kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-5oXsZ7Bt1OtMcHaT9_U_EApCjuUiMNnfHb74SHYyGkw   15m   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

Approve the request (the last argument is the request NAME):

kubectl certificate approve  node-csr-5oXsZ7Bt1OtMcHaT9_U_EApCjuUiMNnfHb74SHYyGkw

Check the nodes:

kubectl get node

3. Deploy kube-proxy

3-1. Create the config file
cat > /root/k8s/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/root/k8s/kubernetes/logs \\
--config=/root/k8s/kubernetes/cfg/kube-proxy-config.yml"
EOF
3-2. Create the parameter file
cat > /root/k8s/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /root/k8s/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.0.0.0/24
EOF
3-3. Generate the kube-proxy.kubeconfig file

Generate the kube-proxy certificate:

# switch to the working directory
cd /root/TLS/k8s

# create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

#ls kube-proxy*pem
#kube-proxy-key.pem  kube-proxy.pem

Generate the kubeconfig file:

KUBE_APISERVER="https://192.168.1.222:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy it to the path the config file expects:

cp kube-proxy.kubeconfig /root/k8s/kubernetes/cfg/
3-4. Add kube-proxy to systemd and start it
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/root/k8s/kubernetes/cfg/kube-proxy.conf
ExecStart=/root/k8s/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy

Part 6. Deploying the CNI Network

1. Download the package
https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
2. Create the directory and unpack

mkdir -p /root/k8s/cni/bin
cd /root/k8s/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz  -C  .
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
##sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g" kube-flannel.yml
## if the flannel image registry is unreachable, switch it to a reachable mirror

The wget above downloads kube-flannel.yml.
3. Deploy the CNI network

kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
kubectl get node

The nodes are now Ready. (A calico entry also appeared here because the calico network add-on was applied later as a separate experiment.)



4. Authorize the apiserver to access the kubelet

cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics 
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml

Adding a New Worker Node

1. Copy the deployed node files to the new nodes (machines 111 and 112)

scp -r kubernetes [email protected]:/root/k8s/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system/
scp -r cni [email protected]:/root/k8s/
scp kubernetes/ssl/ca.pem [email protected]:/root/k8s/kubernetes/ssl/
2. Delete the kubelet certificate and kubeconfig file
Note: these files are generated automatically after the certificate request is approved and are different on every node, so they must be deleted and regenerated.
rm kubernetes/cfg/kubelet.kubeconfig
rm -f kubernetes/ssl/kubelet*

3. Change the hostname settings (run on the node)

cd /root/k8s/kubernetes/cfg
vim kubelet.conf    # change the hostname override
--hostname-override=k8s-node1

vim kube-proxy-config.yml
hostnameOverride: k8s-node1

4. Start kubelet and kube-proxy on the node

systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
5. On the master node, approve the new node's kubelet certificate request
kubectl get csr
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro

6. Check the node status



Joining the remaining 112 node to the cluster is the same; only the hostname changes.



Part 7. Dashboard and CoreDNS

Dashboard: a simple web UI
CoreDNS: the DNS service inside K8s

1. Deploy the Dashboard

https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

By default the Dashboard is only reachable from inside the cluster; change its Service to the NodePort type to expose it externally:
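The edit lands in the Service object of the downloaded recommended.yaml; a sketch of the modified Service (the port values are the upstream defaults for this Dashboard version, and nodePort: 30001 matches the access URL below):

```yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort            # added: expose the Service on every node's IP
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001       # added: fixed external port
  selector:
    k8s-app: kubernetes-dashboard
```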


1-1. Deploy the component
kubectl apply -f kubernetes-dashboard.yaml
1-2. Check the service status
kubectl get pods,svc -n kubernetes-dashboard
1-3. Access the Dashboard:

https://192.168.1.222:30001


1-4. Create a service account, bind it to the default cluster-admin role, and create a token to log in
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

2. Deploy CoreDNS

CoreDNS provides name resolution for Services inside the cluster.

kubectl apply -f coredns.yaml    # coredns.yaml is prepared separately
[root@k8s-master1 k8s]# kubectl get pods -n kube-system|grep dns
coredns-58d8cd457b-frkh9                  1/1     Running   0          55s

Test:

[root@k8s-master1 cfg]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
