Kubernetes Basics and a Multi-Node k8s Deployment Example

Table of Contents

  • I. Kubernetes
    • 1. Kubernetes overview
    • 2. Kubernetes features
    • 3. Kubernetes core concepts
  • II. Kubernetes cluster architecture and components
    • 1. Master components
    • 2. Node components
  • III. Flannel
    • 1. Flannel overview
    • 2. Flannel core concepts
    • 3. VXLAN mode
  • IV. Kubernetes deployment example
    • 1. Lab environment
    • 2. Configuration steps
      • etcd deployment
      • master deployment
      • Node deployment
      • master02 deployment
      • Load balancer deployment

I. Kubernetes

1. Kubernetes overview

  Kubernetes, abbreviated k8s, is a container cluster management system that Google open-sourced in 2014. It is used to deploy, scale, and manage containerized applications, and provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and related capabilities, making containerized applications simpler and more efficient to run.

  • Container orchestration: similar to Compose; images and containers can be created in batches.
  • Resource scheduling: resources can be assigned automatically by the system or specified explicitly.
  • Elastic scaling: as noted when comparing containers with virtual machines, container startup time is on the millisecond level, so large numbers of containers can be created in a short time to serve a workload.
  • Deployment management: declares the desired state of resources through several controller types; stateless and stateful deployments are the key cases.
  • Service discovery: etcd acts like a database with service-discovery capability; it records operations on containers and a large amount of cluster data and is accessed through the apiserver. In production it needs at least three members for redundancy.

2. Kubernetes features

  • Self-healing

    Restarts failed containers when a node fails, and replaces and redeploys them to keep the expected number of replicas; kills containers that fail their health checks and does not send client requests to containers that are not yet ready, so that the online service is not interrupted.

  • Elastic scaling

    Scales application instances up and down quickly, by command, through the UI, or automatically based on CPU and other resource usage, to handle high concurrency at peak times and reclaim resources during off-peak periods, saving cost.

  • Automatic deployment and rollback

    k8s updates applications with a rolling-update mechanism, updating one pod at a time (a pod usually corresponds to one docker container) instead of deleting all pods at once; if a problem appears during the update, the change is rolled back so the upgrade does not affect the service (see the kubectl sketch after the notes below).

  • Service discovery and load balancing

    k8s provides a single access point for a group of containers (an internal IP address and a DNS name) and load-balances across all of the associated containers, so users do not have to worry about container IPs.

  • Secret and configuration management

    Manages secrets and application configuration without exposing sensitive data in images, improving data security. Commonly used configuration can also be stored in k8s for applications to consume.

  • Storage orchestration

    Mounts external storage systems, whether local storage, public cloud (such as AWS), or network storage (NFS, GlusterFS, Ceph), as part of the cluster's resources, greatly improving storage flexibility.

  • Batch processing

    Provides one-off and scheduled tasks, covering batch data processing and analysis scenarios.

    Notes:

    Rolling update: old containers are replaced with new ones one at a time; because the replacement is gradual, requests may reach either old or new containers during the process.

    Blue-green deployment: two availability zones take turns without downtime, like the two hemispheres of a dolphin's brain, one resting while the other works.

    Canary (gray) deployment: the rolling update described above applied zone by zone, replacing old containers with new ones from start to finish, without stopping.
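A minimal sketch of scaling, a rolling update, and a rollback with kubectl, assuming a Deployment named nginx with a container named nginx already exists (the replica count and image tag are only illustrative):

# elastic scaling by command: scale the Deployment to 5 replicas
kubectl scale deployment nginx --replicas=5

# rolling update: change the image; pods are replaced gradually, not all at once
kubectl set image deployment/nginx nginx=nginx:1.19

# watch the rollout progress
kubectl rollout status deployment/nginx

# if the new version misbehaves, roll back to the previous revision
kubectl rollout undo deployment/nginx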

3. Kubernetes core concepts

  • Pod

    The pod is the smallest deployable unit in k8s. It is a group of containers; the containers in one pod share a network namespace, and pods are ephemeral.

  • Controllers

    ReplicaSet: maintains the expected number of Pod replicas

    Deployment: stateless application deployment

    StatefulSet: stateful application deployment

    DaemonSet: ensures that every (eligible) Node runs a copy of a given Pod

    Job: one-off tasks

    CronJob: scheduled tasks

    Controllers are higher-level objects used to deploy and manage Pods.

  • Service

    Prevents Pods from becoming unreachable.

    Defines an access policy for a group of Pods. After containers are deployed, users cannot reach them without a Service; like port mapping, an access port must be exposed (see the sketch after this list).

  • Label: a label attached to a resource, used to associate, query, and filter objects

  • Namespaces: namespaces, which isolate objects logically

  • Annotations: annotations
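As a minimal sketch of how these objects fit together (the names, labels, and nginx image below are only illustrative), a Deployment selects its Pods by label, and a Service uses the same label to expose them:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web           # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort         # exposes a port on every node
  selector:
    app: web             # the Service forwards to Pods with the same label
  ports:
  - port: 80
    targetPort: 80
EOF

The Service's selector matches the Pod template's labels, which is exactly the Label-based association described above.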

II. Kubernetes cluster architecture and components

1. Master components

  • kube-apiserver

    The Kubernetes API: the single entry point to the cluster and the coordinator of the other components. It exposes a RESTful API; every create, delete, update, query, and watch operation on object resources goes through the APIServer before being persisted to etcd.

  • kube-controller-manager

    Handles the routine background tasks in the cluster. Each resource type has a corresponding controller, and the ControllerManager is responsible for managing these controllers.

  • kube-scheduler

    Selects a Node for newly created Pods according to scheduling algorithms. It can be deployed anywhere, on the same node as other components or on a separate one.

    Note that not every Pod has to be placed by the scheduler: when a Pod is pinned to a specific node, the scheduler can be bypassed.

  • etcd

    A distributed key-value store used to hold cluster state, such as Pod and Service objects.

2. Node components

  • kubelet

    The kubelet is the Master's agent on a Node. It manages the lifecycle of the containers running on its host: creating containers, mounting volumes into Pods, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.

  • kube-proxy

    Implements the Pod network proxy on each Node, maintaining network rules and performing Layer 4 load balancing; it is the entry point for clients.

  • docker or rocket

    The container engine, which runs the containers.

III. Flannel

1. Flannel overview

  Flannel is an overlay network tool designed for Kubernetes. Its goal is to give every host that runs Kubernetes a complete subnet of its own. Flannel provides a virtual network for containers by assigning each host a subnet; it is based on Linux TUN/TAP and uses UDP to encapsulate IP packets to build the overlay, so that docker containers on different nodes can communicate with each other, and it relies on etcd to keep track of how the network is allocated.

  Flannel creates a network device such as flannel.1 (or similar) on the host, together with a set of routing rules.

2. Flannel core concepts

  • Overlay Network

    An overlay network is a virtual network layered on top of the underlying physical network; hosts in the overlay are connected by virtual links.

  • VXLAN

    One of the data-forwarding modes implemented by flannel: the original packet is encapsulated in UDP, the underlay network's IP/MAC is used as the outer header, and the result is transmitted over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers it to the target address.

  • Flannel

    Flannel is one kind of overlay network: it wraps the original packet inside another network packet for routing, forwarding, and communication. It currently supports UDP, VXLAN, AWS VPC, GCE, and other forwarding backends.

  • The flannel0 device

    Passes IP packets between the operating-system kernel and user-space applications.

    • From kernel space to user space

      When the operating system sends an IP packet to the flannel0 device, flannel0 hands the packet to the application that created the device, i.e. the flanneld process.

    • From user space to kernel space

      When the flanneld process sends an IP packet to the flannel0 device, the packet appears in the host's network stack and is then handled according to the host's routing table.

  • Flannel subnets

    Flannel assigns each host a separate subnet, and every container on that host gets its address from that subnet. The subnet-to-host mapping is stored in etcd. docker0 must be tied to flannel, so that the docker0 bridge's address range becomes flannel's subnet for that host.

3. VXLAN mode

(Figures 2 and 3: VXLAN mode illustrations)
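Once flannel is running in VXLAN mode (deployment steps below), the flannel.1 device and its forwarding entries can be inspected on a node with standard iproute2 commands; this is a verification sketch, not part of the original walkthrough:

# VXLAN details of the flannel.1 device (VNI, local VTEP address, UDP port)
ip -d link show flannel.1

# per-host subnet routes installed by flannel
ip route | grep flannel.1

# forwarding-database entries mapping remote VTEP MACs to node IPs
bridge fdb show dev flannel.1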

IV. Kubernetes deployment example

1. Lab environment

Load balancers

  • Nginx01: 20.0.0.50/24
  • Nginx02: 20.0.0.60/24

Master nodes

  • master01: 20.0.0.10/24
  • master02: 20.0.0.20/24

Node nodes

  • node01: 20.0.0.30/24
  • node02: 20.0.0.40/24

2. Configuration steps

etcd deployment

etcd certificates
This part generates quite a few certificates and related files, and some commands reference the certificate files with relative paths, so pay attention to the working directory you run them from.

[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# su
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# rz -E
rz waiting to receive.

//These two files are prepared in advance: etcd-cert.sh contains the code for creating the CA certificates, which is reused below
//etcd.sh is the etcd configuration-and-startup script that will be executed directly later

[root@master k8s]# ls
etcd-cert.sh  etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
[root@master k8s]# cd /usr/local/bin
[root@master bin]# rz -E
rz waiting to receive.

//These are the certificate tools; they need to be downloaded
//Download addresses: https://pkg.cfssl.org/R1.2/cfssl_linux-amd64, https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64, https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
//cfssl generates certificates, cfssljson generates certificates from JSON input, and cfssl-certinfo inspects certificate information

[root@master bin]# ls
cfssl  cfssl-certinfo  cfssljson
[root@master bin]# chmod +x *
[root@master bin]# cd 
[root@master ~]# cd k8s/
[root@master k8s]# ls
etcd-cert  etcd.sh

//Define the CA config (ca-config.json); the JSON body is in etcd-cert.sh and was not captured here

[root@master k8s]# cat > ca-config.json <<EOF

//CA certificate signing request (ca-csr.json); the JSON body is in etcd-cert.sh

[root@master k8s]# cat > ca-csr.json <<EOF

//Generate the CA key and certificate

[root@master k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/28 16:17:31 [INFO] generating a new CA key and certificate from CSR
2020/09/28 16:17:31 [INFO] generate received request
2020/09/28 16:17:31 [INFO] received CSR
2020/09/28 16:17:31 [INFO] generating key: rsa-2048
2020/09/28 16:17:31 [INFO] encoded CSR
2020/09/28 16:17:31 [INFO] signed certificate with serial number 225437059867776436062700610309289006313657657183
[root@master k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert  etcd.sh

//Server certificate signing request (server-csr.json), whose hosts field lists the etcd cluster members; the JSON body is in etcd-cert.sh

[root@master k8s]# cat > server-csr.json <<EOF

//Generate the etcd server certificate: server-key.pem and server.pem

[root@master k8s]# ls
ca-config.json  ca-csr.json  ca.pem     etcd.sh     server-csr.json  server.pem
ca.csr          ca-key.pem   etcd-cert  server.csr  server-key.pem
[root@master k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/28 16:20:51 [INFO] generate received request
2020/09/28 16:20:51 [INFO] received CSR
2020/09/28 16:20:51 [INFO] generating key: rsa-2048
2020/09/28 16:20:51 [INFO] encoded CSR
2020/09/28 16:20:51 [INFO] signed certificate with serial number 692180165096155002840320772719909924938206748479
2020/09/28 16:20:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s]# ls
ca-config.json  ca-csr.json  ca.pem     etcd.sh     server-csr.json  server.pem
ca.csr          ca-key.pem   etcd-cert  server.csr  server-key.pem

[root@master k8s]# mv ca* etcd-cert/
[root@master k8s]# ls
etcd-cert  etcd.sh  server.csr  server-csr.json  server-key.pem  server.pem
[root@master k8s]# mv server* etcd-cert/
[root@master k8s]# ls
etcd-cert  etcd.sh

Install etcd

[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz 
[root@master k8s]# cd etcd-v3.3.10-linux-amd64/
[root@master etcd-v3.3.10-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md

//Create three directories to keep the files generated above and below in one place: cfg for configuration files, bin for binaries, ssl for certificates

[root@master etcd-v3.3.10-linux-amd64]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master etcd-v3.3.10-linux-amd64]# cp etcd /opt/etcd/bin/
[root@master etcd-v3.3.10-linux-amd64]# cp etcdctl /opt/etcd/bin/
[root@master etcd-v3.3.10-linux-amd64]# cd ../etcd-cert/
[root@master etcd-cert]# ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
[root@master etcd-cert]# cp *.pem /opt/etcd/ssl/
[root@master etcd-cert]# cd ..

//Because the Node machines are not configured yet, the command will hang; you can open another terminal to check the etcd process. If the firewalls on the Node machines are still up, or the rules are wrong, it will also fail
//Running the etcd.sh script generates the etcd configuration file (named etcd) under /opt/etcd/cfg/ and the startup unit etcd.service under /usr/lib/systemd/system/ (a sketch of the generated configuration follows the command)

[root@master k8s]# bash etcd.sh etcd01 20.0.0.20 etcd02=https://20.0.0.30:2380,etcd03=https://20.0.0.40:2380
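For reference, the /opt/etcd/cfg/etcd file that etcd.sh generates for etcd01 typically looks like the following; this is a sketch reconstructed from the script arguments above and the per-node settings shown below, and the data directory path is an assumption:

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://20.0.0.20:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.20:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.20:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.20:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://20.0.0.20:2380,etcd02=https://20.0.0.30:2380,etcd03=https://20.0.0.40:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"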

//Copy the certificates to the other nodes, because joining the cluster requires certificate authentication

[root@master k8s]# scp -r /opt/etcd/ [email protected]:/opt/
[root@master k8s]# scp -r /opt/etcd/ [email protected]:/opt

//Copy the startup unit to the other nodes

[root@master k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

//Modify the configuration files copied over from master
The settings to change in node01's configuration file:

ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://20.0.0.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.30:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.30:2379"

The settings to change in node02's configuration file:

ETCD_NAME="etcd03"
ETCD_LISTEN_PEER_URLS="https://20.0.0.40:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.40:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.40:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.40:2379"

//Start etcd on both node machines

[root@node01 ssl]# systemctl start etcd
[root@node01 ssl]# systemctl enable etcd

//Check the cluster health from master

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379" cluster-health
member 60bc7e36f63b965 is healthy: got healthy result from https://20.0.0.30:2379
member 2cc2add1558dd1c9 is healthy: got healthy result from https://20.0.0.40:2379
member e3197fd6a5933614 is healthy: got healthy result from https://20.0.0.20:2379
cluster is healthy

//Run the script once more to finish the etcd cluster deployment

[root@master k8s]# bash etcd.sh etcd01 20.0.0.20 etcd02=https://20.0.0.30:2380,etcd03=https://20.0.0.40:2380

Docker deployment on the Node machines

Install the docker engine on all node machines (one common way is sketched below).
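A minimal installation sketch, assuming CentOS 7 and the Aliyun mirror of the docker-ce repository (any reachable docker-ce repository works):

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker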

Flannel network configuration on the Node machines
Subnet allocation
//Write the subnet to be allocated into etcd for flannel to use. The relevant certificates are needed, and the command below does not reference them with absolute paths, so run it from the directory that contains the certificates

[root@node01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
The following is printed:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

//Check what was written; this can be done on any node

[root@node02 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379" get /coreos.com/network/config

{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Install flannel

//Prepare the flannel-v0.10.0-linux-amd64.tar.gz package on all Node machines

[root@node01 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz

//Unpack it on all node machines

[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md

//Create a k8s working directory to hold the related files

[root@node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

//Write the flannel script, which defines the configuration file and the startup unit
//The address in ETCD_ENDPOINTS points at etcd; in this case every machine, master and node alike, is a member of the etcd cluster, so 127.0.0.1 is used as the default

[root@node01 ~]# vim flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

//Enable the flannel network

[root@node01 ~]# bash flannel.sh https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379

//Configure docker to use flannel

[root@node01 ~]# vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@node01 ~]# cat /run/flannel/subnet.env 
DOCKER_OPT_BIP="--bip=172.17.3.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
//After this, docker0's address will become 172.17.3.1
DOCKER_NETWORK_OPTIONS=" --bip=172.17.3.1/24 --ip-masq=false --mtu=1450"

//Restart the docker service

[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker

//Check the flannel network

[root@node01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.3.1  netmask 255.255.255.0  broadcast 172.17.3.255
        inet6 fe80::42:a6ff:fedf:8b9d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:a6:df:8b:9d  txqueuelen 0  (Ethernet)
        RX packets 7775  bytes 314728 (307.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16038  bytes 12470574 (11.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.3.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::740b:6dff:fe85:b995  prefixlen 64  scopeid 0x20<link>
        ether 76:0b:6d:85:b9:95  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 588 (588.0 B)
...

The configuration on node01 and node02 is almost identical. Once both are done, create a container on each node, check the NIC inside each container, and ping one from the other; if the ping succeeds the overlay works (a sketch follows).
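A minimal verification sketch, assuming the busybox image is available; the container IP used below (172.17.3.2, from node01's subnet) is only illustrative:

# on node01
docker run -itd --name test01 busybox
docker exec -it test01 ip addr                 # note the container's IP, e.g. 172.17.3.2

# on node02
docker run -itd --name test02 busybox
docker exec -it test02 ping -c 3 172.17.3.2    # should succeed across nodes through flannel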

master deployment

Certificate preparation

[root@master ~]# cd k8s/
[root@master k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz

//Create directories to hold the files generated later

[root@master k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}

//Create a directory for the certificates and files that master needs

[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# rz -E
rz waiting to receive.
[root@master k8s-cert]# ls
k8s-cert.sh

//Define the CA config

[root@master k8s-cert]# cat > ca-config.json <<EOF
> {
> "signing": {
>  "default": {
>    "expiry": "87600h"
>  },
>  "profiles": {
>    "kubernetes": {
>       "expiry": "87600h",
>       "usages": [
>          "signing",
>          "key encipherment",
>          "server auth",
>          "client auth"
>      ]
>    }
>  }
> }
> }
> EOF
 [root@master k8s-cert]# ls
ca-config.json  k8s-cert.sh

//CA signing request

[root@master k8s-cert]# cat > ca-csr.json <<EOF
> {
>  "CN": "kubernetes",
>  "key": {
>      "algo": "rsa",
>      "size": 2048
>  },
>  "names": [
>      {
>          "C": "CN",
>          "L": "Beijing",
>          "ST": "Beijing",
>        "O": "k8s",
>          "OU": "System"
>      }
>  ]
> }
> EOF
[root@master k8s-cert]# ls
ca-config.json  ca-csr.json  k8s-cert.sh

//Generate the CA key and certificate

[root@master k8s-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/29 15:12:35 [INFO] generating a new CA key and certificate from CSR
2020/09/29 15:12:35 [INFO] generate received request
2020/09/29 15:12:35 [INFO] received CSR
2020/09/29 15:12:35 [INFO] generating key: rsa-2048
2020/09/29 15:12:35 [INFO] encoded CSR
2020/09/29 15:12:35 [INFO] signed certificate with serial number 3593449326719768921682602612991420656487961
[root@master k8s-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  k8s-cert.sh

//Server certificate signing request

[root@master k8s-cert]# cat > server-csr.json <<EOF
> {
>  "CN": "kubernetes",
>  "hosts": [
>    "10.0.0.1",
>    "127.0.0.1",
>    "20.0.0.10",
>    "20.0.0.20",
>    "20.0.0.8",
>    "20.0.0.50",
>    "20.0.0.60",
>    "kubernetes",
>    "kubernetes.default",
>    "kubernetes.default.svc",
>    "kubernetes.default.svc.cluster",
>    "kubernetes.default.svc.cluster.local"
>  ],
>  "key": {
>      "algo": "rsa",
>      "size": 2048
>  },
>  "names": [
>      {
>          "C": "CN",
>          "L": "BeiJing",
>          "ST": "BeiJing",
>          "O": "k8s",
>          "OU": "System"
>      }
>  ]
> }
> EOF
 [root@master k8s-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  k8s-cert.sh  server-csr.json

//Generate the server certificate

[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kuber server-csr.json | cfssljson -bare server
2020/09/29 15:15:52 [INFO] generate received request
2020/09/29 15:15:52 [INFO] received CSR
2020/09/29 15:15:52 [INFO] generating key: rsa-2048
2020/09/29 15:15:53 [INFO] encoded CSR
2020/09/29 15:15:53 [INFO] signed certificate with serial number 4577775142977539456210654504476898126934909
2020/09/29 15:15:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls
ca-config.json  ca-csr.json  ca.pem       server.csr       server-key.pem
ca.csr          ca-key.pem   k8s-cert.sh  server-csr.json  server.pem

//Admin certificate signing request

[root@master k8s-cert]# cat > admin-csr.json <<EOF
> {
> "CN": "admin",
> "hosts": [],
> "key": {
>  "algo": "rsa",
>  "size": 2048
> },
> "names": [
>  {
>    "C": "CN",
>    "L": "BeiJing",
>    "ST": "BeiJing",
>    "O": "system:masters",
>    "OU": "System"
>  }
> ]
> }
> EOF
[root@master k8s-cert]# ls
admin-csr.json  ca.csr       ca-key.pem  k8s-cert.sh  server-csr.json  server.pem
ca-config.json  ca-csr.json  ca.pem      server.csr   server-key.pem

//Generate the admin certificate

[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kuber admin-csr.json | cfssljson -bare admin
2020/09/29 15:18:20 [INFO] generate received request
2020/09/29 15:18:20 [INFO] received CSR
2020/09/29 15:18:20 [INFO] generating key: rsa-2048
2020/09/29 15:18:20 [INFO] encoded CSR
2020/09/29 15:18:20 [INFO] signed certificate with serial number 6947870123681991616501552650764507059877403
2020/09/29 15:18:20 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls
admin.csr       admin-key.pem  ca-config.json  ca-csr.json  ca.pem       server.csr       server-key.pem
admin-csr.json  admin.pem      ca.csr          ca-key.pem   k8s-cert.sh  server-csr.json  server.pem

//kube-proxy certificate signing request

[root@master k8s-cert]# cat > kube-proxy-csr.json <<EOF
> {
> "CN": "system:kube-proxy",
> "hosts": [],
> "key": {
>  "algo": "rsa",
>  "size": 2048
> },
> "names": [
>  {
>    "C": "CN",
>    "L": "BeiJing",
>    "ST": "BeiJing",
>    "O": "k8s",
>    "OU": "System"
>  }
> ]
> }
> EOF
 [root@master k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy-csr.json  server-key.pem
admin-key.pem   ca.csr          ca.pem       server.csr           server.pem

//Generate the kube-proxy certificate

[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kuber kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/09/29 15:19:06 [INFO] generate received request
2020/09/29 15:19:06 [INFO] received CSR
2020/09/29 15:19:06 [INFO] generating key: rsa-2048
2020/09/29 15:19:06 [INFO] encoded CSR
2020/09/29 15:19:06 [INFO] signed certificate with serial number 3574700643984106503033589912926081516191700
2020/09/29 15:19:06 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr          server.pem

//Copy the CA and server certificates into the kubernetes ssl directory

[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/

Install the Kubernetes master components

[root@master k8s-cert]# cd ..
[root@master k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz  k8s-cert
[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz 

//Copy the master binaries into the working directory

[root@master bin]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s/

//Generate the bootstrap token

[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c4c16d4c95b7f13ccc5062bf6561224e
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
c4c16d4c95b7f13ccc5062bf6561224e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

//Start the apiserver

[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# unzip master.zip 
Archive:  master.zip
  inflating: apiserver.sh            
  inflating: controller-manager.sh   
  inflating: scheduler.sh            
[root@master k8s]# ls
apiserver.sh           etcd.sh                          k8s-cert                              master.zip
controller-manager.sh  etcd-v3.3.10-linux-amd64         kubernetes                            scheduler.sh
etcd-cert              etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# bash apiserver.sh 20.0.0.20 https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

//Check that the apiserver service is running

[root@master k8s]# ps aux |grep kube

[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379 \
--bind-address=20.0.0.20 \
--secure-port=6443 \
--advertise-address=20.0.0.20 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

[root@master k8s]# netstat -antp |grep 6443
tcp        0      0 20.0.0.20:6443          0.0.0.0:*               LISTEN      17861/kube-apiserve 
tcp        0      0 20.0.0.20:36170         20.0.0.20:6443          ESTABLISHED 17861/kube-apiserve 
tcp        0      0 20.0.0.20:6443          20.0.0.20:36170         ESTABLISHED 17861/kube-apiserve 
[root@master k8s]# netstat -antp |grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      17861/kube-apiserve 

//Start the scheduler

[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

//Start the controller-manager

[root@master k8s]# chmod +x controller-manager.sh
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

//Check the kubernetes cluster component health

[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  

Node deployment

//On master: copy the files the nodes need over to them

[root@master bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[email protected]'s password: 
kubelet                                                               100%  168MB 105.8MB/s   00:01    
kube-proxy                                                            100%   48MB  63.5MB/s   00:00    
[root@master bin]# scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[email protected]'s password: 
kubelet                                                               100%  168MB 124.2MB/s   00:01    
kube-proxy                                                            100%   48MB  77.2MB/s   00:00    

//node01
Unpack node.zip; the important pieces are the two scripts kubelet.sh and proxy.sh

[root@node01 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  图片  音乐
core.17702       initial-setup-ks.cfg                proxy.sh   模板  文档  桌面
flannel.sh       kubelet.sh                          README.md  视频  下载
[root@node01 ~]# cat kubelet.sh
#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@node01 ~]# cat proxy.sh
#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

//On master
//Create a directory for the files that kubelet and kube-proxy on the nodes will need

[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/

//Copy in the kubeconfig.sh file and rename it

[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
# # Create the TLS Bootstrapping Token
##BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
#BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

#cat > token.csv <<EOF

//
Note that the token in the following part of the configuration file must be replaced with your own (the full set of kubectl config commands in this script is sketched after this excerpt):

# set the client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=c4c16d4c95b7f13ccc5062bf6561224e \
  --kubeconfig=bootstrap.kubeconfig
How to get the token value (the c4c16d4c95b7f13ccc5062bf6561224e string above):
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
c4c16d4c95b7f13ccc5062bf6561224e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
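For orientation, the rest of kubeconfig (kubeconfig.sh) typically builds the two files with kubectl config commands along the following lines; this is a sketch that assumes APISERVER and SSL_DIR are the script's two positional arguments, not the literal file content:

APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"

# bootstrap.kubeconfig: used by kubelet for TLS bootstrapping, authenticated by the token
kubectl config set-cluster kubernetes --certificate-authority=$SSL_DIR/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=c4c16d4c95b7f13ccc5062bf6561224e --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig: used by kube-proxy, authenticated with its client certificate
kubectl config set-cluster kubernetes --certificate-authority=$SSL_DIR/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy --client-certificate=$SSL_DIR/kube-proxy.pem --client-key=$SSL_DIR/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig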

//Set the environment variable

[root@master kubeconfig]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@master kubeconfig]# source /etc/profile

//Check component health

[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}   

//Generate the configuration files bootstrap.kubeconfig and kube-proxy.kubeconfig

[root@master kubeconfig]# bash kubeconfig 20.0.0.20 /root/k8s/k8s-cert/
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

//Copy the configuration files to the node machines

[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg
[email protected]'s password: 
bootstrap.kubeconfig                                       100% 2163     1.1MB/s   00:00    
kube-proxy.kubeconfig                                      100% 6269     5.1MB/s   00:00    
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg
[email protected]'s password: 
bootstrap.kubeconfig                                       100% 2163     1.3MB/s   00:00    
kube-proxy.kubeconfig                                      100% 6269     8.4MB/s   00:00    

//Create the bootstrap role binding that allows kubelet to connect to the apiserver and request certificate signing (critical)

[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

//On node01
//Running the script generates the configuration files kubelet, kubelet.kubeconfig and kubelet.config,
//plus the startup unit kubelet.service, and then starts kubelet

[root@node01 ~]# vim kubelet.sh

#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@node01 ~]# bash kubelet.sh 20.0.0.30
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ps aux |grep kube

//kubelet on node01 is now running and will ask master for authorization
//On master:

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   87s   kubelet-bootstrap   Pending

//Approve the request

[root@master kubeconfig]# kubectl certificate approve node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs
certificatesigningrequest.certificates.k8s.io/node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs approved
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   9m31s   kubelet-bootstrap   Approved,Issued

//Check the k8s cluster nodes

[root@master kubeconfig]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.30   Ready    <none>   54s   v1.12.3

//Start kube-proxy on node01: running the script generates the kube-proxy configuration file and the startup unit kube-proxy.service,
//and starts kube-proxy

[root@node01 ~]# vim proxy.sh

#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
[root@localhost ~]# systemctl status kube-proxy.service 

node02 deployment

//node01 is fully deployed and already has every file needed to start the services, so simply copy them to node02
//Check the files first

[root@node01 opt]# tree kubernetes/
kubernetes/
├── bin
│   ├── flanneld
│   ├── kubelet
│   ├── kube-proxy
│   └── mk-docker-opts.sh
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── flanneld
│   ├── kubelet
│   ├── kubelet.config
│   ├── kubelet.kubeconfig
│   ├── kube-proxy
│   └── kube-proxy.kubeconfig
└── ssl
    ├── kubelet-client-2020-09-30-09-42-34.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2020-09-30-09-42-34.pem
    ├── kubelet.crt
    └── kubelet.key
[root@node01 ~]# scp -r /opt/kubernetes/ [email protected]:/opt/
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system/

//The files under ssl are the certificates master issued to node01's kubelet during bootstrap authorization; node02 must get its own, so delete them

[root@node02 ~]# cd /opt/kubernetes/ssl/
[root@node02 ssl]# rm -rf *

//Modify the copied configuration files

[root@node02 cfg]# cd /opt/kubernetes/cfg

//The address must be changed to node02's own, 20.0.0.40
[root@node02 cfg]# vim kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.40 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node02 cfg]# vim kubelet.config 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 20.0.0.40
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@node02 cfg]# vim kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.40 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

//Start the services

[root@node02 cfg]# systemctl start kubelet.service 
[root@node02 cfg]#systemctl enable kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl start kube-proxy.service 
[root@node02 cfg]# systemctl enable kube-proxy.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

//On master, the request from node02 is now visible

[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc   16s   kubelet-bootstrap   Pending
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   30m   kubelet-bootstrap   Approved,Issued

//Approve the request

[root@master kubeconfig]# kubectl certificate approve node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc
certificatesigningrequest.certificates.k8s.io/node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc approved
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc   74s   kubelet-bootstrap   Approved,Issued
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   31m   kubelet-bootstrap   Approved,Issued

//Check the k8s cluster nodes

[root@master kubeconfig]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.30   Ready    <none>   22m   v1.12.3
20.0.0.40   Ready    <none>   28s   v1.12.3

Single-master deployment complete
The configuration so far gives a single-master cluster; the steps below extend it into a multi-node (multi-master) structure

master02 deployment

20.0.0.10
Clear the firewall rules and change the hostname to master02 (a sketch of these preparation commands follows)
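A minimal preparation sketch for 20.0.0.10, assuming firewalld/iptables and SELinux are what need to be cleared:

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# iptables -F
[root@localhost ~]# setenforce 0
[root@localhost ~]# hostnamectl set-hostname master02
[root@localhost ~]# su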
//Copy everything master02 needs over from master

[root@master kubernetes]# scp -r /opt/kubernetes/ [email protected]:/opt/
[root@master k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service [email protected]:/usr/lib/systemd/system/
[root@master ~]# scp -r /opt/etcd/ [email protected]:/opt/

//Modify the copied configuration file

[root@master02 cfg]# vim kube-apiserver 
#Change the following two lines to master02's own address
--bind-address=20.0.0.10 \

--advertise-address=20.0.0.10 \

//Add the environment variable so the commands can be found

[root@master02 cfg]# vim /etc/profile

export PATH=$PATH:/opt/kubernetes/bin/
[root@master02 ~]# source /etc/profile

//Get the node information

[root@master02 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.30   Ready    <none>   29h   v1.12.3
20.0.0.40   Ready    <none>   29h   v1.12.3

Load balancer deployment

//Install nginx; both load balancers are configured the same way

[root@nginx01 ~]# cd /etc/yum.repos.d/
[root@nginx01 yum.repos.d]# vim nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
[root@nginx01 yum.repos.d]# yum -y install nginx

//Add a Layer 4 (stream) proxy that forwards requests to the masters
[root@nginx01 yum.repos.d]# vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}
#add the following configuration
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 20.0.0.10:6443;
        server 20.0.0.20:6443;
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
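The configuration can be checked for syntax errors before starting (optional):

[root@nginx01 yum.repos.d]# nginx -t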
[root@nginx01 yum.repos.d]# systemctl start nginx

//Install keepalived

[root@nginx01 yum.repos.d]# yum -y install keepalived
[root@nginx01 yum.repos.d]# vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived 

global_defs { 
   # recipient addresses 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33
    virtual_router_id 51 # VRRP route ID of this instance; unique per instance 
    priority 100    # priority; the backup server is set to 90 
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        20.0.0.8/24 
    } 
    track_script {
        check_nginx
    }
}

//The keepalived configuration on nginx02:

! Configuration File for keepalived 

global_defs { 
   # recipient addresses 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   # sender address 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP route ID of this instance; unique per instance 
    priority 90    # priority; the backup server is set to 90 
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second 
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        20.0.0.8/24 
    } 
    track_script {
        check_nginx
    } 
}

//Prepare the script referenced in the keepalived configuration; it checks the nginx service and stops keepalived if nginx has stopped

[root@nginx01 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@nginx01 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@nginx01 ~]# systemctl start keepalived

//Check the VIP failover (a command sketch follows)
The VIP should sit on nginx01 first; when nginx on nginx01 dies, it should move to nginx02
//Verify the failover (run pkill nginx on nginx01, then ip a on nginx02 to see the VIP)
//Recovery (on nginx01, start the nginx service first, then start keepalived)
//nginx document root: /usr/share/nginx/html
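A minimal sketch of this check; the VIP 20.0.0.8 comes from the keepalived configuration above:

[root@nginx01 ~]# ip a | grep 20.0.0.8        # the VIP is on nginx01 at first
[root@nginx01 ~]# pkill nginx                 # kill nginx; check_nginx.sh then stops keepalived
[root@nginx02 ~]# ip a | grep 20.0.0.8        # the VIP has moved to nginx02
[root@nginx01 ~]# systemctl start nginx       # recovery: start nginx first,
[root@nginx01 ~]# systemctl start keepalived  # then keepalived; with priority 100 the VIP returns to nginx01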

Modify the configuration so the node machines point at the load balancers
//Up to now the nodes have been talking to one specific master; they should now talk to the VIP on the load balancers
//Change the node configuration files to the single VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)
//This must be done on both node machines

[root@node01 cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node01 cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node01 cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

//Change all of them to the VIP

server: https://20.0.0.8:6443
[root@node01 cfg]# systemctl restart kubelet.service 
[root@node01 cfg]# systemctl restart kube-proxy.service 

//The node traffic can now be seen on nginx

[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
20.0.0.30 20.0.0.10:6443 - [01/Oct/2020:16:07:16 +0800] 200 1114
20.0.0.30 20.0.0.10:6443 - [01/Oct/2020:16:07:16 +0800] 200 1114
20.0.0.40 20.0.0.10:6443 - [01/Oct/2020:16:10:02 +0800] 200 1115
20.0.0.40 20.0.0.20:6443 - [01/Oct/2020:16:10:02 +0800] 200 1116

//Create a pod from master

[root@master02 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master02 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-cfkst   0/1     ContainerCreating   0          15s
[root@master02 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-cfkst   1/1     Running   0          37s

//View the pod logs
//Authorization is required first, otherwise it fails like this:

[root@master02 ~]# kubectl logs nginx-dbddb74b8-cfkst
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-cfkst)

//Grant the permission, after which the logs can be viewed

[root@master02 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master02 ~]# kubectl logs nginx-dbddb74b8-cfkst
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

//Check the pod network

[root@master02 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE
nginx-dbddb74b8-cfkst   1/1     Running   0          11m   172.17.3.3   20.0.0.30   <none>

//The web page can be fetched from the node that hosts the pod

[root@node01 ~]# curl 172.17.3.3



Welcome to nginx!



Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

//The access can also be seen in the pod logs from master

[root@master02 ~]# kubectl logs nginx-dbddb74b8-cfkst
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
