Not Bad at All ---- Kubernetes Single-Node Deployment

Not Bad at All ---- Kubernetes Single-Master Binary Deployment

Table of Contents

Preface

1. Single-Node Architecture

2. Component Overview

3. Deployment Steps

3.1 Deploying the etcd cluster
3.2 Installing Docker on the node hosts
3.3 Configuring the flannel network
3.4 Deploying the master components
3.5 Deploying the node components
3.6 Checking the cluster status

Preface

Each of Kubernetes' core services can be run simply by executing its binary directly with the appropriate startup parameters. On the Kubernetes master you need to deploy the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler service processes; on each worker node you need to deploy the docker, kubelet, and kube-proxy service processes.

Copy the Kubernetes binaries into /usr/bin, then create a systemd unit file for each service under /usr/lib/systemd/system; that completes the software installation. For Kubernetes to work properly, the startup parameters of every service still have to be configured carefully.

1. Single-Node Architecture

  • Single-node architecture diagram

[Figure 1: single-node architecture diagram]

2. Component Overview

  • Master components

    • kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm. It can be deployed flexibly, either on the same host as the other master components or on a separate one.
    • etcd: a distributed key-value store that holds the cluster state, such as Pod and Service objects.
    • kube-apiserver: the Kubernetes API and the single entry point of the cluster, coordinating all other components. It exposes a RESTful API; every create, update, delete, and watch operation on any resource goes through the API server and is then persisted to etcd.
    • kube-controller-manager: handles the routine background tasks of the cluster. Each resource type has a corresponding controller, and the controller manager is responsible for running them all.
  • Node components

    • kubelet: the agent of the master on each Node. It manages the lifecycle of the containers running on that host, for example creating containers, mounting volumes into Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
    • kube-proxy: implements the Pod network proxy on each Node, maintaining the network rules and layer-4 load balancing.
    • docker or rkt: the container engine that actually runs the containers
  • About the etcd cluster

    • etcd is an open-source project started by the CoreOS team in June 2013. Its goal is a highly available distributed key-value store; it is licensed under Apache v2 and implemented in Go.

    Anyone who has worked with distributed systems knows that one of their most fundamental problems is reaching consensus on shared information; only on that basis can service configuration be managed and services be discovered, updated, and kept in sync. Solving these problems usually calls for a distributed database that guarantees consistency, for example the classic Apache ZooKeeper project, which uses a Paxos-style algorithm to achieve strong consistency.

    etcd was designed specifically for cluster environments. It adopts the simpler Raft consensus algorithm, likewise provides strong data consistency, and supports cluster membership management and automatic service discovery.

    etcd is currently maintained at github.com/coreos/etcd; the latest releases are the 3.x series.

    Inspired by Apache ZooKeeper and by doozer (a consistent distributed database aimed at small amounts of data), etcd was designed with four priorities in mind:

    • Simple: exposes RESTful and gRPC APIs;
    • Secure: supports TLS-based secure connections;
    • Fast: supports around ten thousand concurrent writes per second, with latency kept at the millisecond level;
    • Reliable: fully distributed, using the Raft algorithm to guarantee consistency.

    Typically you start an etcd instance on each of several hosts and join them into one cluster. The instances in a cluster automatically keep each other's data consistent, which means applications on any of the nodes see the same information.
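
    As a minimal illustration of that consistency (a sketch only, assuming the three-member cluster, endpoints, and TLS file paths built later in this article; the key name /demo/message is made up), a value written through one member can immediately be read back through another:

    ####write a key through the member on 192.168.100.10
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.100.10:2379" set /demo/message "hello"
    ####read the same key back through the member on 192.168.100.20; the value is identical
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.100.20:2379" get /demo/message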

  • About CA authentication

    • In a trusted internal network, the Kubernetes components can talk to the master over the kube-apiserver insecure port, plain HTTP on port 8080. But if the API server has to serve external clients, or containers inside the cluster need to query the API server for cluster information, the safer approach is to enable HTTPS. Kubernetes supports CA-signed mutual (two-way) certificate authentication as well as simpler HTTP basic or token authentication; the CA-signed certificate scheme offers the highest security. In this walkthrough the cluster is configured with CA certificates: the kube-apiserver, kube-controller-manager, and kube-scheduler processes on the master and the kubelet and kube-proxy processes on every Node are all set up for CA-signed mutual TLS. The CA-signed certificates are produced roughly as follows:

      • Generate a certificate for kube-apiserver and sign it with the CA.
      • Configure kube-apiserver's certificate-related startup parameters, including the CA certificate (used to verify the signatures of client certificates) and its own CA-signed certificate and private key.
      • Generate a certificate, also signed by the CA, for every client process that accesses the Kubernetes API server (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and client programs such as kubectl), and add the CA certificate, the client's own certificate, and related parameters to each program's startup options.
      • Certificates used by each component
      Component        Certificates
      etcd             ca.pem, server.pem, server-key.pem
      flannel          ca.pem, server.pem, server-key.pem
      kube-apiserver   ca.pem, server.pem, server-key.pem
      kubelet          ca.pem, ca-key.pem
      kube-proxy       ca.pem, kube-proxy.pem, kube-proxy-key.pem
      kubectl          ca.pem, admin.pem, admin-key.pem

3. Deployment Steps

  • Lab environment
Node     IP address        OS / resources                     Components
master   192.168.100.10    CentOS 7.4 (2 cores, 4 GB RAM)     kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node1    192.168.100.20    CentOS 7.4 (2 cores, 4 GB RAM)     kubelet, kube-proxy, docker, flannel, etcd
node2    192.168.100.30    CentOS 7.4 (2 cores, 4 GB RAM)     kubelet, kube-proxy, docker, flannel, etcd
  • Download the etcd component: Releases · etcd-io/etcd · GitHub

[Figure 2: etcd releases download page]

  • Download the Kubernetes server binaries: Releases · kubernetes/kubernetes · GitHub

[Figure 3: Kubernetes releases download page]

3.1 Deploying the etcd cluster
  • Deployment on the master node

    • Write the script that generates the etcd configuration and starts the service (one-shot deployment script)
    [root@master ~]# mkdir k8s
    [root@master ~]# cd k8s/
    [root@master k8s]# vim etcd.sh
    
    
    #!/bin/bash
    # example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
    ####positional parameters: node name, IP address, and the other cluster members
    ETCD_NAME=$1
    ETCD_IP=$2
    ETCD_CLUSTER=$3
     
    WORK_DIR=/opt/etcd
    ####create the etcd configuration file
    cat <<EOF >$WORK_DIR/cfg/etcd
    #[Member]
    ETCD_NAME="${ETCD_NAME}"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"        ####etcd peer (internal) port
    ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"        ####etcd client-facing port
    #[Clustering], cluster membership settings
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    EOF
    ##generate the systemd unit file
    cat <<EOF >/usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    ##the TLS certificates for peer and client authentication are passed as flags below
    [Service]
    Type=notify
    EnvironmentFile=${WORK_DIR}/cfg/etcd
    ExecStart=${WORK_DIR}/bin/etcd \
    --name=\${ETCD_NAME} \
    --data-dir=\${ETCD_DATA_DIR} \
    --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=\${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=${WORK_DIR}/ssl/server.pem \
    --key-file=${WORK_DIR}/ssl/server-key.pem \
    --peer-cert-file=${WORK_DIR}/ssl/server.pem \
    --peer-key-file=${WORK_DIR}/ssl/server-key.pem \
    --trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
    --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target
    EOF
    ##reload systemd, enable and start the service
    systemctl daemon-reload
    systemctl enable etcd
    systemctl restart etcd
    
    • Write the script that generates the CA, signing requests, and keys (one-shot script)
    [root@master k8s]# vim etcd-cert.sh
    ####define the CA signing configuration
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"     
            ]  
          } 
        }         
      }
    }
    EOF 
    ####certificate signing request (CSR) for the CA itself
    cat > ca-csr.json <<EOF   
    {   
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    ####generate the CA certificate and key: produces ca-key.pem and ca.pem
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    ####server certificate request covering the three etcd nodes, used for peer and client authentication
    cat > server-csr.json <<EOF
    {
        "CN": "etcd",
        "hosts": [
        "192.168.100.10",
        "192.168.100.20",
        "192.168.100.30"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    EOF
    ####generate the etcd server certificate: produces server-key.pem and server.pem
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    
    • Download the cfssl certificate tools
    [root@master k8s]# vim cfssl.sh 
    
    curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
    curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
    curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
    
    [root@master k8s]# ll /usr/local/bin/
    总用量 18808
    -rwxr-xr-x 1 root root 10376657 1月  16 2020 cfssl
    -rwxr-xr-x 1 root root  6595195 1月  16 2020 cfssl-certinfo
    -rwxr-xr-x 1 root root  2277873 1月  16 2020 cfssljson
    
    
    • Run the script to generate the certificates, then move them into an etcd-cert directory for easier management
    [root@master k8s]# ./etcd-cert.sh 
    2020/11/24 19:44:30 [INFO] generating a new CA key and certificate from CSR
    2020/11/24 19:44:30 [INFO] generate received request
    2020/11/24 19:44:30 [INFO] received CSR
    2020/11/24 19:44:30 [INFO] generating key: rsa-2048
    2020/11/24 19:44:30 [INFO] encoded CSR
    2020/11/24 19:44:30 [INFO] signed certificate with serial number 179234423085770384325949193674187086340408540536
    2020/11/24 19:44:30 [INFO] generate received request
    2020/11/24 19:44:30 [INFO] received CSR
    2020/11/24 19:44:30 [INFO] generating key: rsa-2048
    2020/11/24 19:44:30 [INFO] encoded CSR
    2020/11/24 19:44:30 [INFO] signed certificate with serial number 55679034051257973314798172718159493120130262396
    
    [root@master k8s]# ls
    ca-config.json  ca-csr.json  ca.pem        etcd.sh        server.csr       server-key.pem
    ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
    [root@master k8s]# mkdir etcd-cert
    [root@master k8s]# mv c* etcd-cert
    [root@master k8s]# mv s* etcd-cert
    [root@master k8s]# ls etcd-cert
    ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
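    • Optional sanity check: cfssl-certinfo (downloaded above) can decode the generated certificates, so you can confirm that the three node IPs appear in the server certificate's SAN list and that it was issued by the etcd CA (a sketch; paths assume the etcd-cert directory just created)
    cfssl-certinfo -cert etcd-cert/server.pem     ####check the "sans" entries and the expiry date
    cfssl-certinfo -cert etcd-cert/ca.pem         ####decode the CA certificate itself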
    
    • Upload the master component packages into the working directory
    [root@master k8s]# ls
    etcd-cert  etcd-cert.sh  etcd.sh  etcd-v3.3.10-linux-amd64.tar.gz   kubernetes-server-linux-amd64.tar.gz
    
    • Deploy the etcd component on the master node
    [root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz     ####unpack the archive
    [root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p            ####directories for config files, binaries, and certificates
    [root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/   ####move the etcd and etcdctl binaries into the bin directory
    [root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/           ####copy the certificate files into place
    [root@master k8s]# bash etcd.sh etcd01 192.168.100.10 etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380
    ####open a new session and you will see that etcd has started (it waits for the other members to join)
    [root@master k8s]# ps -ef | grep etcd
    root      12465      1  3 23:04 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.100.10:2380 --listen-client-urls=https://192.168.100.10:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.100.10:2379 --initial-advertise-peer-urls=https://192.168.100.10:2380 --initial-cluster=etcd01=https://192.168.100.10:2380,etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
    root      12477  12344  0 23:04 pts/2    00:00:00 grep --color=auto etcd
    
  • Operations on the node hosts

    • Copy the etcd directory (certificates included) to the other nodes
    [root@master k8s]# scp -r /opt/etcd/ [email protected]:/opt/
    [root@master k8s]# scp -r /opt/etcd/ [email protected]:/opt/
    
    • Copy the etcd systemd unit file to the other nodes as well
    [root@master k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
    [root@master k8s]# scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
    
    • On node1, modify the configuration as follows
    [root@node1 ~]# vim /opt/etcd/cfg/etcd
    
    #[Member]
    ETCD_NAME="etcd02"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.100.20:2380"          ####change to node1's IP address
    ETCD_LISTEN_CLIENT_URLS="https://192.168.100.20:2379"        
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.20:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.20:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.10:2380,etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    
    • On node2, modify the configuration as follows
    #[Member]
    ETCD_NAME="etcd03"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.100.30:2380"         ####change to node2's IP address
    ETCD_LISTEN_CLIENT_URLS="https://192.168.100.30:2379"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.30:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.30:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.10:2380,etcd02=https://192.168.100.20:2380,etcd03=https://192.168.100.30:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    • Finally, start etcd on each node and check its status
    [root@node1 ~]# systemctl start etcd
    [root@node1 ~]# systemctl status etcd
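    • Once etcd is running on all three hosts, a quick health check can be run from the master (a sketch, using the same certificate paths and endpoints as above)
    cd /opt/etcd/ssl/
    /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" cluster-health
    ####every member should be reported as healthy; "cluster is healthy" on the last line means the cluster formed correctly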
    
3.2 Installing Docker on the node hosts
  • Add the Docker repository
[root@node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node1 opt]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf    ####enable IP forwarding
[root@node1 opt]# sysctl -p
  • Install Docker
[root@node1 ~]# yum install -y docker-ce
[root@node1 ~]# systemctl start docker
[root@node1 ~]# systemctl enable docker
[root@node1 ~]# mkdir -p /etc/docker   
[root@node1 ~]# tee /etc/docker/daemon.json <<-'EOF'         ####configure a registry mirror to speed up image pulls
> {
>   "registry-mirrors": ["https://13tjalqi.mirror.aliyuncs.com"]
> }
> EOF
{
  "registry-mirrors": ["https://13tjalqi.mirror.aliyuncs.com"]
}

[root@node1 ~]# systemctl daemon-reload     ####reload systemd unit definitions
[root@node1 ~]# systemctl restart docker
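  • Optional check: confirm the mirror configuration took effect (a sketch; the mirror URL is the one written to daemon.json above)
docker info | grep -A 1 "Registry Mirrors"      ####should list https://13tjalqi.mirror.aliyuncs.com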
3.3 Configuring the flannel network
  • On the master, write the Pod network range into etcd for flannel to use
[root@master k8s]# cd /opt/etcd/ssl/
[root@master ssl]# ls
ca-key.pem  ca.pem  server-key.pem  server.pem
####write the config with set, specifying vxlan as the backend type
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
####use get to check that the value was written correctly
[root@master ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
####this output means the configuration is correct
  • Configure flannel on the nodes; node1 is shown here, repeat the same steps on node2
[root@node1 opt]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p        ####create the working directories used in later steps
[root@node1 opt]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz   ####unpack the flannel archive
flanneld
mk-docker-opts.sh
README.md
[root@node1 opt]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/    ####move the executables into the bin directory
[root@node1 opt]# 
[root@node1 opt]# vim flannel.sh                 ####write the setup script

#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF
####generate the flanneld systemd unit
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

####run the script, passing the etcd endpoints of all three nodes
[root@node1 opt]# bash flannel.sh https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
  • Configure the docker service to use the flannel network
[root@node1 opt]# vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env             ####add the environment file that contains the flannel subnet settings
ExecStart=/usr/bin/dockerd            $DOCKER_NETWORK_OPTIONS   -H fd:// --containerd=/run/containerd/containerd.sock                  ####add the $DOCKER_NETWORK_OPTIONS variable to the start command
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
  • Inspect the flannel subnet information
[root@node1 opt]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.70.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.70.1/24 --ip-masq=false --mtu=1450"
####bip sets the bridge subnet that docker will use at startup
[root@node1 opt]# systemctl daemon-reload    ####reload systemd and restart docker so the new subnet takes effect
[root@node1 opt]# systemctl restart docker
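  • Each node's flanneld also registers its subnet lease back into etcd, so you can list the leases from the master (a sketch, reusing the etcdctl flags from the step that wrote the network config); the subnets match the bip values in each node's subnet.env
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379" ls /coreos.com/network/subnets
####expect one entry per node, e.g. /coreos.com/network/subnets/172.17.70.0-24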
  • Check the network interfaces
[root@node1 opt]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.70.1  netmask 255.255.255.0  broadcast 172.17.70.255
        ether 02:42:84:20:54:c4  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.20  netmask 255.255.255.0  broadcast 192.168.100.255
        inet6 fe80::45f0:dfe5:cb28:7b57  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:62:a0:dd  txqueuelen 1000  (Ethernet)
        RX packets 1093825  bytes 1309774257 (1.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 414614  bytes 38489810 (36.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.70.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::6450:edff:fe90:4a9a  prefixlen 64  scopeid 0x20<link>
        ether 66:50:ed:90:4a:9a  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 37 overruns 0  carrier 0  collisions 0
####a flannel.1 interface with its own address has been created; pinging the other node's flannel address shows the overlay network is connected
[root@node1 opt]# ping 172.17.99.1    ####this is node2's flannel address
PING 172.17.99.1 (172.17.99.1) 56(84) bytes of data.
64 bytes from 172.17.99.1: icmp_seq=1 ttl=64 time=0.329 ms
64 bytes from 172.17.99.1: icmp_seq=2 ttl=64 time=0.974 ms
^C
--- 172.17.99.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.329/0.651/0.974/0.323 ms
[root@node1 opt]# 
  • Verify connectivity from inside containers
[root@node1 opt]# docker run -it centos:7 /bin/bash
[root@d48678cdb07f /]# yum -y install net-tools
[root@d48678cdb07f /]# ifconfig
[root@d48678cdb07f /]# ping 172.17.99.2 
PING 172.17.99.2 (172.17.99.2) 56(84) bytes of data.
64 bytes from 172.17.99.2: icmp_seq=1 ttl=62 time=0.478 ms
64 bytes from 172.17.99.2: icmp_seq=2 ttl=62 time=0.466 ms
^C
--- 172.17.99.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.466/0.472/0.478/0.006 ms
####the second test shows that the two containers on different nodes can reach each other
3.4 Deploying the master components
  • Create the Kubernetes cluster certificates
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# vim k8s-cert.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
      	    "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

####hosts listed in the apiserver server certificate; node addresses are not declared here because
####kubernetes discovers nodes automatically. 192.168.100.10 is master1, 192.168.100.40 is reserved
####for a future master2 and also appears as the first load balancer, 192.168.100.100 is the VIP,
####and 192.168.100.50 is the second load balancer. JSON does not allow comments, so keep the list itself clean.
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.100.10",
      "192.168.100.40",
      "192.168.100.100",
      "192.168.100.40",
      "192.168.100.50",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin   ####admin-csr.json produces the cluster administrator's certificate

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

  • Check the generated certificates; there should be eight .pem files (four key pairs)
[root@master k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr          server.pem
  • Copy the CA and server certificates into the target directory
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
  • Unpack the Kubernetes server components
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
etcd-cert     etcd.sh                   etcd-v3.3.10-linux-amd64.tar.gz     k8s-cert
etcd-cert.sh  etcd-v3.3.10-linux-amd64  flannel-v0.10.0-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
  • Copy the executables into the bin directory
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
  • Create the token file
[root@master bin]# cd /opt/kubernetes/cfg/
[root@master cfg]# ls
[root@master cfg]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')    ####generate a random token
[root@master cfg]# cat > token.csv << EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@master cfg]# ls
token.csv
[root@master cfg]# cat token.csv              ####inspect the token file
c3448b11da03f11f07ade34d862fd428,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
####fields: token, user name, UID, group (role)
  • Using the certificates, binaries, and token, write the apiserver script and start kube-apiserver
[root@master cfg]# cd /root/k8s/
[root@master k8s]# vim apiserver.sh 

#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

[root@master k8s]# bash apiserver.sh 192.168.100.10 https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
  • Check the kube processes
[root@master k8s]# ps aux | grep kube     ####list the kube processes
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver     ####inspect the generated configuration file

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.100.10:2379,https://192.168.100.20:2379,https://192.168.100.30:2379 \
--bind-address=192.168.100.10 \
--secure-port=6443 \
--advertise-address=192.168.100.10 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  • Ports the API server listens on (HTTPS 6443 and the local insecure port 8080)
[root@master k8s]# netstat -ntap | grep 6443
tcp        0      0 192.168.100.10:6443     0.0.0.0:*               LISTEN      15500/kube-apiserve 
tcp        0      0 192.168.100.10:59616    192.168.100.10:6443     ESTABLISHED 15500/kube-apiserve 
tcp        0      0 192.168.100.10:6443     192.168.100.10:59616    ESTABLISHED 15500/kube-apiserve 
[root@master k8s]#  netstat -ntap | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      15500/kube-apiserve 
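  • Optional check: because the insecure port listens only on 127.0.0.1, a quick local curl confirms the API server is answering (a sketch; /healthz and /version are standard apiserver endpoints)
curl http://127.0.0.1:8080/healthz       ####should return ok
curl http://127.0.0.1:8080/version       ####prints the apiserver build version as JSON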
  • Write the kube-scheduler service script and start the service
[root@master k8s]# vim scheduler.sh 

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

[root@master k8s]# chmod +x scheduler.sh 
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku        ####confirm the service is running
  • Write the kube-controller-manager startup script and start the service
[root@master k8s]# vim controller-manager.sh 

#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager


KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

[root@master k8s]# chmod +x controller-manager.sh 
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
  • Check the status of the master components
[root@master k8s]#  /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}  
3.5 Deploying the node components
  • Copy the kubelet and kube-proxy binaries from the master to the node hosts
[root@master bin]#  scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[email protected]'s password: 
[root@master bin]#  scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
[email protected]'s password: 
  • On the master, create the bootstrap.kubeconfig and kube-proxy.kubeconfig files and push them to node1 and node2
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv    ####check the token
c3448b11da03f11f07ade34d862fd428,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master kubeconfig]# vim kubeconfig        ####write the kubeconfig generation script

##first argument: apiserver address
APISERVER=$1
##second argument: path to the certificate directory
SSL_DIR=$2
 
# create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"
 
# set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
 
# set the client credential parameters
# the token below is copied from /opt/kubernetes/cfg/token.csv
kubectl config set-credentials kubelet-bootstrap \
  --token=c3448b11da03f11f07ade34d862fd428 \
  --kubeconfig=bootstrap.kubeconfig
 
# set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
 
 
# set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
 
#----------------------
 
# create the kube-proxy kubeconfig file
 
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
 
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

[root@master kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/     ####add the kubernetes bin directory to PATH
[root@master kubeconfig]# kubectl get cs                            ####check the component status
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}  

  • Generate the kubeconfig files in the /root/k8s/kubeconfig directory
[root@master kubeconfig]# bash kubeconfig 192.168.100.10 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
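  • Optional check: kubectl config view can print the generated files with the embedded certificate data redacted (a sketch), so you can confirm the server URL, the kubelet-bootstrap token, and the kube-proxy client certificate entry
kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig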
  • Copy the generated files to the node hosts
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
[email protected]'s password:    
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
[email protected]'s password: 

  • Create the bootstrap cluster role binding that lets the kubelet-bootstrap user contact the apiserver and request certificate signing; this is how the master bootstraps and manages the nodes (important)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
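  • Optional check: verify the binding before moving on (a sketch); the role should be system:node-bootstrapper and the subject the kubelet-bootstrap user
kubectl describe clusterrolebinding kubelet-bootstrap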

  • On the node, write the kubelet startup script
[root@node1 /]# mkdir /root/k8s
[root@node1 /]# cd /root/k8s/
[root@node1 k8s]# vim kubelet.sh 

#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP} 
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

[root@node1 k8s]# bash kubelet.sh 192.168.100.20
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node1 k8s]# ps aux | grep kube            ####check that the service started
  • On the master, approve node1's request and issue its certificate
[root@master kubeconfig]# kubectl get csr                 ####a Pending request means the node is waiting for its certificate to be issued
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA   111s   kubelet-bootstrap   Pending
[root@master kubeconfig]# kubectl certificate approve node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA  ####approve the request
certificatesigningrequest.certificates.k8s.io/node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA approved
[root@master kubeconfig]# kubectl get csr                ####check again: the request is approved and the node has joined the cluster
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA   4m30s   kubelet-bootstrap   Approved,Issued
[root@master kubeconfig]# kubectl get node     ####list the cluster nodes; node1 has joined successfully
NAME             STATUS   ROLES    AGE    VERSION
192.168.100.20   Ready    <none>   2m8s   v1.12.3
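  • Back on node1, the kubelet has now written its CA-signed client certificate into the cert directory (this is also why those files have to be deleted on node2 later after copying the directory over); a quick look confirms it (a sketch)
ls /opt/kubernetes/ssl/       ####expect kubelet-client-*.pem, kubelet-client-current.pem, kubelet.crt, and kubelet.key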
  • On node1, write the kube-proxy service script and start the service
[root@node1 k8s]# vim proxy.sh
#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

[root@node1 k8s]# bash proxy.sh 192.168.100.20          ####start the kube-proxy service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service
[root@node1 k8s]# systemctl status kube-proxy.service          ####check the kube-proxy service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2020-11-25 22:48:57 CST; 45s ago
 Main PID: 117262 (kube-proxy)
   Memory: 7.5M
   CGroup: /system.slice/kube-proxy.service
           ‣ 117262 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.100.20 --cluster-cidr=10.0.0....

11月 25 22:49:33 node1 kube-proxy[117262]: I1125 22:49:33.406578  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:34 node1 kube-proxy[117262]: I1125 22:49:34.816066  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:35 node1 kube-proxy[117262]: I1125 22:49:35.430822  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:36 node1 kube-proxy[117262]: I1125 22:49:36.829488  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:37 node1 kube-proxy[117262]: I1125 22:49:37.465849  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:38 node1 kube-proxy[117262]: I1125 22:49:38.878549  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:39 node1 kube-proxy[117262]: I1125 22:49:39.510393  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:40 node1 kube-proxy[117262]: I1125 22:49:40.891883  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:41 node1 kube-proxy[117262]: I1125 22:49:41.547500  117262 config.go:141] Calling handler.OnEndpointsUpdate
11月 25 22:49:42 node1 kube-proxy[117262]: I1125 22:49:42.927768  117262 config.go:141] Calling handler.OnEndpointsUpdate

  • On node2, simply copy node1's configuration files over and modify them
[root@node1 kubernetes]# scp -r /opt/kubernetes/ [email protected]:/opt/
[email protected]'s password: 

[root@node1 kubernetes]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service [email protected]:/usr/lib/systemd/system/
[email protected]'s password: 
  • Delete the copied certificate files; new ones will be generated for node2 shortly
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# ls
kubelet-client-2020-11-25-20-11-54.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key
[root@node2 ssl]#  rm -rf *
[root@node2 ssl]# ls
  • Change the IP addresses configured in the kubelet, kubelet.config, and kube-proxy files (three configuration files)
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet


KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.30 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node2 cfg]# vim kubelet.config 


kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.100.30
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
                 
[root@node2 cfg]# vim kube-proxy


KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.100.30 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
  • Start the services
[root@node2 cfg]#  systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service 
[root@node2 cfg]#  systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
  • On the master, check the new request and approve it
[root@master kubeconfig]# kubectl get csr              ####node2's request is pending approval
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE   7m43s   kubelet-bootstrap   Pending
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA   3h      kubelet-bootstrap   Approved,Issued

[root@master kubeconfig]# kubectl certificate approve node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE   ####approve it
certificatesigningrequest.certificates.k8s.io/node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE approved
[root@master kubeconfig]# kubectl get csr                  ####node2 is now approved as well
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-iRwlknujhxFumFBes00IcuFmSR0s_9WvHycn5UEBzYE   11m    kubelet-bootstrap   Approved,Issued
node-csr-yv9vvAtfVtg3XrXf0-cq6AlPbDgSPA1Klq9aSB6WxZA   3h4m   kubelet-bootstrap   Approved,Issued
3.6 Checking the cluster status
[root@master kubeconfig]# kubectl get node                     ####both nodes are configured and Ready
NAME             STATUS   ROLES    AGE    VERSION
192.168.100.20   Ready    <none>   3h1m   v1.12.3
192.168.100.30   Ready    <none>   104s   v1.12.3
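  • As a final smoke test (a sketch; the nginx image is just an example), start a test workload and check that it is scheduled onto one of the nodes and gets a flannel address
kubectl run nginx --image=nginx --replicas=1      ####creates a test deployment (kubectl run may print a deprecation notice on this version)
kubectl get pods -o wide                          ####the pod should end up Running on 192.168.100.20 or 192.168.100.30 with a 172.17.x.x address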

Postscript: A binary Kubernetes installation really is a lot of work, and this is only a single-master cluster; a multi-master setup is even more involved. That said, the configuration of a second master can be produced simply by copying master1's configuration files and modifying them. Next time I will walk through adding another master node, so stay tuned.
