Binary installation and deployment of a multi-master k8s cluster

Deploying a k8s cluster from binaries

    • Environment preparation
    • Installing and deploying etcd
      • Creating the certificates
      • Installing etcd
    • Installing and deploying the flannel component
      • What it does
      • Adding the network configuration to etcd
      • Installing flannel on the cluster nodes
      • Testing inter-node communication
    • Deploying the other k8s components
      • Creating the certificates
      • Deploying the apiserver on the master
      • Starting the scheduler
      • Starting the controller-manager
      • Deploying the kubelet
      • Deploying master02

Environment preparation

Environment:
k8s cluster master01: 192.168.245.211
k8s cluster master02: 192.168.245.206

k8s cluster node01: 192.168.245.209
k8s cluster node02: 192.168.245.210

etcd cluster node 1: 192.168.245.211
etcd cluster node 2: 192.168.245.209
etcd cluster node 3: 192.168.245.210

Installing and deploying etcd

Creating the certificates

Download the cfssl certificate tools to /usr/local/bin/ and make them executable so they can be run directly:

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

[root@localhost k8s]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson 

What the commands do

cfssl: the tool that generates certificates
cfssljson: takes the JSON output of cfssl and writes out the certificate files
cfssl-certinfo: displays certificate information
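
For example, once a certificate or CSR has been generated, cfssl-certinfo can be used to check its subject, validity period and SANs (ca.pem and ca.csr here are simply the files generated later in this section):

[root@localhost k8s]# cfssl-certinfo -cert ca.pem
[root@localhost k8s]# cfssl-certinfo -csr ca.csr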

Create a directory for the etcd certificates

[root@localhost k8s]# mkdir etcd-cert

Create the CA configuration file (we act as our own CA; the etcd certificates will be signed by it)

[root@localhost k8s]# cat > ca-config.json <<EOF
> {
>   "signing": {
>     "default": {
>       "expiry": "87600h"
>     },
>     "profiles": {
>       "www": {
>          "expiry": "87600h",
>          "usages": [
>             "signing",
>             "key encipherment",
>             "server auth",
>             "client auth"
>         ]
>       }
>     }
>   }
> }
> EOF

Notes on the fields (kept out of the file itself, since inline comments would make the JSON invalid):

expiry: certificate lifetime; 87600h is 10 years
profiles: multiple profiles can be defined, each with its own expiry, intended usage and other parameters
signing: the certificate can be used to sign other certificates; the generated ca.pem will contain CA=TRUE
server auth: a client can use this CA to verify the certificate presented by a server
client auth: a server can use this CA to verify the certificate presented by a client
Note that the last entry in the usages array has no trailing comma.

Create the CA certificate signing request file

[root@localhost k8s]# cat > ca-csr.json <<EOF 
> {
>     "CN": "etcd CA",
>     "key": {
>         "algo": "rsa",
>         "size": 2048
>     },
>     "names": [
>         {
>             "C": "CN",
>             "L": "Beijing",
>             "ST": "Beijing"
>         }
>     ]
> }
> EOF

CN: Common Name, freely chosen; here it marks this as the root certificate used for etcd authentication
algo: the key algorithm
size: the key length; 2048 is the minimum for RSA
C: country
L: locality (city)
ST: state or province

Generate the CA certificate and private key

The trailing dash can also be omitted: cfssl gencert -initca ca-csr.json | cfssljson -bare ca works just as well.

[root@localhost k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca - 
2020/09/28 16:11:54 [INFO] generating a new CA key and certificate from CSR
2020/09/28 16:11:54 [INFO] generate received request
2020/09/28 16:11:54 [INFO] received CSR
2020/09/28 16:11:54 [INFO] generating key: rsa-2048
2020/09/28 16:11:55 [INFO] encoded CSR
2020/09/28 16:11:55 [INFO] signed certificate with serial number 590206773442412657075217441716999534465487365861

The generated files will be needed later, so keep them safe:

ca-key.pem  the CA private key
ca.pem  the CA root certificate

Certificates that etcd needs:

1. etcd serves external clients, so it needs a set of etcd server certificates
2. etcd members talk to each other, so they need a set of etcd peer certificates
3. kube-apiserver accesses etcd, so it needs a set of etcd client certificates

Here a single certificate is used for all three purposes.

Create the etcd server certificate signing request file

[root@localhost k8s]# cat > server-csr.json <<EOF
> {
>     "CN": "etcd",
>     "hosts": [
>     "192.168.245.211",
>     "192.168.245.209",
>     "192.168.245.210"
>     ],
>     "key": {
>         "algo": "rsa",
>         "size": 2048
>     },
>     "names": [
>         {
>             "C": "CN",
>             "L": "BeiJing",
>             "ST": "BeiJing"
>         }
>     ]
> }
> EOF

hosts: must list the IPs (or hostnames) of every node in the etcd cluster

Issue the etcd server certificate and private key

[root@localhost k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/28 16:16:17 [INFO] generate received request
2020/09/28 16:16:17 [INFO] received CSR
2020/09/28 16:16:17 [INFO] generating key: rsa-2048
2020/09/28 16:16:18 [INFO] encoded CSR
2020/09/28 16:16:18 [INFO] signed certificate with serial number 621191565914440988284456398609893537198453121189
2020/09/28 16:16:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

The generated files will be needed later, keep them safe:

server-key.pem  the server's private key
server.pem  the server's signed certificate

Move all the certificates and keys into the etcd-cert directory for easier management

[root@localhost k8s]# mv ca.csr ca-config.json server-csr.json server-key.pem ca-csr.json ca-key.pem ca.pem server.csr server.pem etcd-cert/

Installing etcd

Extract the etcd archive

[root@localhost k8s]# tar zxfv etcd-v3.3.10-linux-amd64.tar.gz

After extraction, move the etcd and etcdctl executables into /opt/etcd/bin/ (create the /opt/etcd directories first with the mkdir command shown below if they do not exist yet)

[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
Note: etcd is the command that starts the etcd server; it accepts the various startup parameters.
etcdctl provides the command-line interface for operating the etcd server.

Copy the certificates etcd needs for authentication to /opt/etcd/ssl/

[root@localhost k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p

[root@localhost k8s]# ls /opt/etcd
bin  cfg  ssl

[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/

Configure etcd (configuration file /opt/etcd/cfg/etcd) and start it.
For convenience, both the etcd configuration and the systemd unit are generated by an etcd.sh script; running this script starts the etcd service (a sketch of such a script is shown below).
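
The etcd.sh script itself is not reproduced here; a minimal sketch of what such a script typically does, assuming the /opt/etcd layout used in this guide (the data directory and restart policy are assumptions), could look like this:

#!/bin/bash
# Usage: ./etcd.sh <node-name> <node-ip> <other-members>
# e.g.:  ./etcd.sh etcd01 192.168.245.211 etcd02=https://192.168.245.209:2380,etcd03=https://192.168.245.210:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

# Write the configuration file read by the systemd unit
cat > ${WORK_DIR}/cfg/etcd <<EOF
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Write the systemd unit, pointing etcd at the certificates created above
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=${WORK_DIR}/ssl/server.pem \\
--key-file=${WORK_DIR}/ssl/server-key.pem \\
--peer-cert-file=${WORK_DIR}/ssl/server.pem \\
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \\
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \\
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd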

[root@localhost k8s]# bash etcd.sh etcd01 192.168.245.211 etcd02=https://192.168.245.209:2380,etcd03=https://192.168.245.210:2380

Note: all three etcd members need to be started for the cluster to form; since only this first node has been started so far, the service will appear to hang here. This is normal.

Distribute the etcd files to the other two etcd cluster nodes

[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.245.209:/opt/
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.245.210:/opt/
Note: etcd authentication only requires the three files ca.pem, server-key.pem and server.pem.
In general the certificates only need to be created once; when adding new nodes to the cluster later, simply copy the certificates under /opt/etcd/ssl/ to the new node.

On the other two etcd nodes, edit the etcd configuration file so that the member name and IP addresses match each node

[root@node1 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://192.168.245.209:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.245.209:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.245.209:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.245.209:2379"
---------------------------------------------------------------------
[root@node2 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_LISTEN_PEER_URLS="https://192.168.245.210:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.245.210:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.245.210:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.245.210:2379"

Distribute the etcd systemd unit file to the other two etcd cluster nodes

[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.245.209:/usr/lib/systemd/system/
root@192.168.245.209's password: 
etcd.service                                                                                              100%  923     1.2MB/s   00:00    
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.245.210:/usr/lib/systemd/system/
root@192.168.245.210's password: 
etcd.service                                                                                              100%  923   749.1KB/s   00:00    
[root@localhost k8s]# 

Start etcd on both nodes with systemctl

[root@node1 ~]# systemctl start etcd
[root@node1 ~]# systemctl status etcd

etcd offers two ways to interact with it: the etcdctl command-line tool and the HTTP API.

etcdctl is written in Go and is itself a wrapper around the HTTP API; it is the easier of the two for day-to-day use, so it is what is used here.
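
For reference, the same kind of health check can also be performed directly against the HTTP API with curl, using the same certificates (a sketch; /health is a standard etcd endpoint and returns {"health":"true"} when the member is healthy):

[root@localhost etcd-cert]# curl --cacert ca.pem --cert server.pem --key server-key.pem https://192.168.245.211:2379/health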

Verify the cluster health from the master; the certificate options must be supplied, otherwise the command fails.

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379" cluster-health
member 15239980699a4b97 is healthy: got healthy result from https://192.168.245.210:2379
member 77360e8f1ec8f43b is healthy: got healthy result from https://192.168.245.211:2379
member da0a263e34e24794 is healthy: got healthy result from https://192.168.245.209:2379
cluster is healthy
--endpoints: comma-separated list of cluster member addresses
cluster-health: checks the health of the etcd cluster

Other options and parameters can be listed with etcdctl --help.

The etcd cluster is now deployed.

Installing and deploying the flannel component

What it does

Flannel is a network fabric designed by the CoreOS team for Kubernetes. In short, it gives the Docker containers created on the different cluster nodes virtual IP addresses that are unique across the whole cluster, so that containers can communicate with one another; the address allocations it hands out are stored in etcd.

Adding the network configuration to etcd

Add the flannel network configuration to the etcd cluster

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Check the value that was written

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
set /coreos.com/network/config adds a network configuration record; flannel uses it to carve out the virtual IP subnets handed to Docker on each node
get /coreos.com/network/config reads the network configuration record back; no extra argument is needed after the key

Network: the flannel address pool
Backend: how packets are forwarded; the default is udp, and the vxlan backend performs better than the default udp mode

Installing flannel on the cluster nodes

Extract the archive

[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld    //the main executable
mk-docker-opts.sh     //shell script that generates the Docker startup options
README.md

[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld    
mk-docker-opts.sh  
README.md

Move mk-docker-opts.sh and flanneld to /opt/kubernetes/bin/ for convenience

[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

[root@node2 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

Configure and start flannel. For convenience, the flanneld configuration and the systemd unit are generated by a flannel.sh script; running this script starts flannel (a sketch of such a script follows).
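
The flannel.sh script is not included in the post; a minimal sketch of what it typically does, assuming the paths already used in this guide (/opt/kubernetes/cfg and the etcd certificates in /opt/etcd/ssl), could look like this:

#!/bin/bash
# Usage: ./flannel.sh <comma-separated etcd endpoints>
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

# flanneld options: etcd endpoints plus the etcd TLS material created earlier
cat > /opt/kubernetes/cfg/flanneld <<EOF
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \\
-etcd-cafile=/opt/etcd/ssl/ca.pem \\
-etcd-certfile=/opt/etcd/ssl/server.pem \\
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# systemd unit: start flanneld, then let mk-docker-opts.sh translate the
# allocated subnet into Docker options written to /run/flannel/subnet.env
cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld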

[root@node1 ~]# bash flannel.sh https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service

Modify Docker's startup parameters so that it uses flannel for IP allocation and for its networking.

Connect Docker to flannel: the flanneld unit contains ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env,
so the Docker unit file must load that environment file and reference DOCKER_NETWORK_OPTIONS.

[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
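
After editing the unit file, systemd must be reloaded and Docker restarted for the new options to take effect (this step is implied but not shown explicitly here):

[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker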

When flannel runs, it generates an environment file, /run/flannel/subnet.env, containing the parameters this host needs to communicate through flannel

[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.5.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.5.1/24 --ip-masq=false --mtu=1450"

[root@node2 ~]# 
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.10.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"    
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.10.1/24 --ip-masq=false --mtu=1450"

What flannel does when it starts:

1. Fetches the network configuration from etcd
2. Carves out a subnet for this host and registers it in etcd (the registrations can be listed as shown below)
3. Writes the subnet information to /run/flannel/subnet.env
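
The per-node subnet registrations from step 2 can be listed straight out of etcd, using the same etcdctl invocation as before (run from the directory containing the etcd certificates); each entry corresponds to one node, e.g. the 172.17.5.0/24 and 172.17.10.0/24 subnets seen below:

[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379" ls /coreos.com/network/subnets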

Testing inter-node communication

node1 has been allocated 172.17.5.1 for docker0

[root@node1 ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.5.1  netmask 255.255.255.0  broadcast 172.17.5.255
        ether 02:42:0d:2f:33:a7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.245.209  netmask 255.255.255.0  broadcast 192.168.245.255
        inet6 fe80::20c:29ff:fea4:e64b  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:a4:e6:4b  txqueuelen 1000  (Ethernet)
        RX packets 805062  bytes 1075564084 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 209586  bytes 19408651 (18.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.5.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::842b:ecff:fedd:8d5f  prefixlen 64  scopeid 0x20<link>
        ether 86:2b:ec:dd:8d:5f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 35 overruns 0  carrier 0  collisions 0

node2 has been allocated 172.17.10.1 for docker0

[root@node2 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.10.1  netmask 255.255.255.0  broadcast 172.17.10.255
        ether 02:42:d9:e8:0c:66  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.245.210  netmask 255.255.255.0  broadcast 192.168.245.255
        inet6 fe80::20c:29ff:fe9f:6979  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:9f:69:79  txqueuelen 1000  (Ethernet)
        RX packets 804058  bytes 1074727380 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 226147  bytes 19694949 (18.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.10.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::2c39:49ff:fe70:9f2  prefixlen 64  scopeid 0x20<link>
        ether 2e:39:49:70:09:f2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 36 overruns 0  carrier 0  collisions 0

Test that each node can ping the other node's docker0 address

[root@node2 ~]# ping 172.17.5.1
PING 172.17.5.1 (172.17.5.1) 56(84) bytes of data.
64 bytes from 172.17.5.1: icmp_seq=1 ttl=64 time=1.53 ms
64 bytes from 172.17.5.1: icmp_seq=2 ttl=64 time=0.692 ms
64 bytes from 172.17.5.1: icmp_seq=3 ttl=64 time=3.33 ms
^C
--- 172.17.5.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.692/1.854/3.332/1.100 ms
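
Pinging docker0 only verifies routing between the hosts. To confirm that containers on different nodes can actually reach each other, an optional check is to start a throwaway container on each node and ping across; the busybox image and the addresses here are just an example:

[root@node1 ~]# docker run -it --rm busybox sh
/ # ip addr show eth0      # note the 172.17.5.x address assigned to this container
/ # ping <172.17.10.x address of the container started the same way on node2>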

Deploying the other k8s components

Creating the certificates

Create the /opt/kubernetes directory structure plus a k8s-cert working directory to hold the certificates

[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s]# mkdir k8s-cert
[root@localhost k8s]# cd k8s-cert/

Generate the certificates needed by the k8s components; the cfssl commands are all wrapped in a script (a condensed sketch is shown below).
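
The k8s-cert.sh script is not reproduced in the original. A condensed sketch of what such a script typically contains follows; the host list of the apiserver certificate (loopback, the first service cluster IP 10.0.0.1, both master IPs and the in-cluster service names) is an assumption based on the addresses used in this guide:

#!/bin/bash
# CA for the k8s components (same cfssl workflow as for the etcd CA above)
cat > ca-config.json <<EOF
{"signing":{"default":{"expiry":"87600h"},"profiles":{"kubernetes":{"expiry":"87600h","usages":["signing","key encipherment","server auth","client auth"]}}}}
EOF
cat > ca-csr.json <<EOF
{"CN":"kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"Beijing","ST":"Beijing","O":"k8s","OU":"System"}]}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# apiserver server certificate; hosts must cover every address the apiserver is reached on
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.245.211",
    "192.168.245.206",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {"algo": "rsa", "size": 2048},
  "names": [{"C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System"}]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

# admin client certificate (used by kubectl)
cat > admin-csr.json <<EOF
{"CN":"admin","hosts":[],"key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"BeiJing","ST":"BeiJing","O":"system:masters","OU":"System"}]}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# kube-proxy client certificate
cat > kube-proxy-csr.json <<EOF
{"CN":"system:kube-proxy","hosts":[],"key":{"algo":"rsa","size":2048},"names":[{"C":"CN","L":"BeiJing","ST":"BeiJing","O":"k8s","OU":"System"}]}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy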

[root@localhost k8s-cert]# vim k8s-cert.sh 

[root@localhost k8s-cert]# bash k8s-cert.sh
[root@localhost k8s-cert]# ll *.pem
-rw------- 1 root root 1679 Sep 29 15:19 admin-key.pem
-rw-r--r-- 1 root root 1399 Sep 29 15:19 admin.pem   //kubectl's certificate; admin rights, i.e. access to all kubernetes APIs
-rw------- 1 root root 1675 Sep 29 15:19 ca-key.pem
-rw-r--r-- 1 root root 1359 Sep 29 15:19 ca.pem    //the root certificate used to sign the other components' certificates
-rw------- 1 root root 1675 Sep 29 15:19 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Sep 29 15:19 kube-proxy.pem    //kube-proxy's certificate
-rw------- 1 root root 1675 Sep 29 15:19 server-key.pem
-rw-r--r-- 1 root root 1643 Sep 29 15:19 server.pem   //the apiserver's certificate

The scheduler and controller-manager talk to the apiserver over the insecure port here, so they do not need certificates.
The kubelet's certificate is signed by the controller-manager and generated automatically, so it is not needed here either.

[root@localhost k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/

Deploying the apiserver on the master

Install k8s: the extracted archive contains all the components needed on the master

[root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@localhost k8s]# cd /root/k8s/kubernetes/server/bin
[root@localhost bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

Generate the TLS bootstrap token required by the apiserver (token authentication will be enabled on the apiserver)

[root@localhost ~]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
49973b42a12aa4d513019a15e3661e2d

Save the token to a csv file; the apiserver reads it at startup

[root@localhost bin]# cd /root/k8s/
[root@localhost k8s]# vim /opt/kubernetes/cfg/token.csv

49973b42a12aa4d513019a15e3661e2d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Format: token, user name, user UID, user group
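
Equivalently, the file can be written non-interactively; the token value below is just the one generated above and should be replaced with your own:

BOOTSTRAP_TOKEN=49973b42a12aa4d513019a15e3661e2d
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF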

Start the apiserver

[root@localhost k8s]# bash apiserver.sh 192.168.245.211 https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@localhost k8s]# 

Check the process

[root@localhost k8s]# ps aux | grep kube
root      73461 30.8 16.7 397888 311924 ?       Ssl  15:29   0:08 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379 --bind-address=192.168.245.211 --secure-port=6443 --advertise-address=192.168.245.211 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      73481  0.0  0.0 112724   988 pts/1    S+   15:30   0:00 grep --color=auto kube

k8s serves its API through the kube-apiserver process, which runs on the master node(s). By default it listens on two ports:
The local port 8080 accepts HTTP requests; unauthenticated and unauthorized HTTP requests reach the API server through this port.
The secure port 6443 accepts HTTPS requests and handles authentication based on the token file, client certificates or HTTP basic auth, as well as policy-based authorization; it binds to a non-localhost address which is set with the --bind-address startup parameter.

[root@localhost k8s]# netstat -ntap | grep 6443
tcp        0      0 192.168.245.211:6443    0.0.0.0:*               LISTEN      73461/kube-apiserve 
tcp        0      0 192.168.245.211:6443    192.168.245.211:52656   ESTABLISHED 73461/kube-apiserve 
tcp        0      0 192.168.245.211:52656   192.168.245.211:6443    ESTABLISHED 73461/kube-apiserve 

[root@localhost k8s]# netstat -ntap | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      73461/kube-apiserve
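
Both ports can also be exercised directly; for example, the insecure port answers plain HTTP on the loopback interface (/version is a standard apiserver endpoint):

[root@localhost k8s]# curl http://127.0.0.1:8080/version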

Check the version information (the apiserver must be running properly, otherwise the server version cannot be queried)

[root@master01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:57:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:46:57Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

Client Version: the kubectl version
Server Version: the k8s server version

Starting the scheduler

[root@localhost k8s]# bash scheduler.sh 127.0.0.1
[root@localhost k8s]# ps aux | grep ku
postfix   73082  0.0  0.1  91732  1988 ?        S    15:09   0:00 pickup -l -t unix -u
root      73461  5.4 16.7 397888 311924 ?       Ssl  15:29   0:15 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.245.211:2379,https://192.168.245.209:2379,https://192.168.245.210:2379 --bind-address=192.168.245.211 --secure-port=6443 --advertise-address=192.168.245.211 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root      73649  2.3  1.0  46128 19068 ?        Ssl  15:34   0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Starting the controller-manager

[root@localhost k8s]# bash controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

Check the cluster health status

[root@localhost k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Deploying the kubelet

Distribute the kubelet and kube-proxy binaries to the nodes:

[root@localhost bin]# pwd
/root/k8s/kubernetes/server/bin
[root@localhost bin]# scp kubelet kube-proxy root@192.168.245.209:/opt/kubernetes/bin/
root@192.168.245.209's password: 
kubelet                                                                                                 100%  168MB 121.2MB/s   00:01    
kube-proxy                                                                                              100%   48MB 107.3MB/s   00:00    
[root@localhost bin]# scp kubelet kube-proxy root@192.168.245.210:/opt/kubernetes/bin/
root@192.168.245.210's password: 
kubelet                                                                                                 100%  168MB 130.4MB/s   00:01    
kube-proxy                                                                                              100%   48MB 138.5MB/s   00:00    

Create a kubeconfig directory to hold the kubelet configuration files

[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/

Copy the prepared kubeconfig.sh script into it and rename it

[root@localhost kubeconfig]# ls
kubeconfig.sh
[root@localhost kubeconfig]# mv kubeconfig.sh kubeconfig

Edit the script and fill in the token generated earlier

[root@localhost kubeconfig]# vim kubeconfig 

# set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=49973b42a12aa4d513019a15e3661e2d \

[root@localhost kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/

Run the kubeconfig script; it generates the bootstrap.kubeconfig and kube-proxy.kubeconfig configuration files.

The kubelet learns the apiserver's address from bootstrap.kubeconfig and presents the token to the apiserver to authenticate itself. The apiserver then authorizes it and binds the role, at which point the kubelet can request a certificate; the apiserver generates it automatically (it is actually signed by the controller-manager), and from then on the kubelet uses that certificate to talk to the apiserver (the concrete steps follow below).
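
The core of such a kubeconfig script is a series of kubectl config commands. A minimal sketch, assuming the apiserver IP and certificate directory passed as arguments in the invocation below and the token from token.csv, could look like this:

#!/bin/bash
# Usage: ./kubeconfig <apiserver-ip> <dir containing ca.pem and kube-proxy*.pem>
APISERVER=$1
SSL_DIR=$2
BOOTSTRAP_TOKEN=49973b42a12aa4d513019a15e3661e2d   # must match /opt/kubernetes/cfg/token.csv
export KUBE_APISERVER="https://${APISERVER}:6443"

# bootstrap.kubeconfig, used by the kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=${SSL_DIR}/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig, using the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=${SSL_DIR}/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=${SSL_DIR}/kube-proxy.pem \
  --client-key=${SSL_DIR}/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig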

[root@localhost kubeconfig]# bash kubeconfig 192.168.245.211 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@localhost kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig

Distribute these 2 files to the 2 nodes in the cluster

[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.245.209:/opt/kubernetes/cfg/
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.245.210:/opt/kubernetes/cfg/

The kubelet uses the TLS Bootstrapping mechanism to register with kube-apiserver automatically, which is very useful when there are many nodes or when the cluster is scaled out later.
With TLS authentication enabled on the master apiserver, a node's kubelet must present a valid certificate signed by the CA in order to communicate with the apiserver and join the cluster.
Signing certificates by hand becomes tedious when there are many nodes, hence the TLS Bootstrapping mechanism: the kubelet requests a certificate from the apiserver automatically as a low-privilege user, and the kubelet's certificate is signed dynamically on the apiserver side.

When the kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. Before that, the kubelet-bootstrap user from the bootstrap token file must be granted the ClusterRole system:node-bootstrapper (it can be listed with kubectl get clusterroles);
only then does the kubelet have permission to create certificate signing requests. This is done by creating a ClusterRoleBinding:

# --user=kubelet-bootstrap specifies the user name, i.e. the user defined in /opt/kubernetes/cfg/token.csv

The same user name is also what was written into the kubeconfig file /opt/kubernetes/cfg/bootstrap.kubeconfig.

Grant the TLS bootstrap user the role:

[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created

Start the kubelet service on node01

[root@node1 bin]# bash kubelet.sh 192.168.245.209
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node1 bin]# ps aux | grep kube
root       9112 17.3  2.5 473792 46952 ?        Ssl  17:39   0:01 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.245.209 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root       9127  0.0  0.0 112724   988 pts/3    S+   17:39   0:00 grep --color=auto kube

On its first start the kubelet sends a certificate signing request to kube-apiserver; the node is only added to the cluster after the kubernetes side approves the request.

List the not-yet-approved certificate signing requests; they are in the "Pending" state

[root@localhost ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-5zTiKXjy4yGAbkEHJOlepjY4o7_PWA14HX2x0cu4_R4   77s   kubelet-bootstrap   Pending

Approve node1's request so it can join the cluster

[root@localhost ~]# kubectl certificate approve node-csr-5zTiKXjy4yGAbkEHJOlepjY4o7_PWA14HX2x0cu4_R4    
certificatesigningrequest.certificates.k8s.io/node-csr-5zTiKXjy4yGAbkEHJOlepjY4o7_PWA14HX2x0cu4_R4 approved

On node01, start the kube-proxy service

[root@node1 ~]# bash proxy.sh 192.168.245.209
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

Copy everything under /opt/kubernetes/ to node2

[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.245.210:/opt/

Also copy the service unit files to node2

[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.245.210:/usr/lib/systemd/system/
root@192.168.245.210's password: 
kubelet.service                                                                                                            100%  264   129.3KB/s   00:00    
kube-proxy.service                                                                                                         100%  231   260.7KB/s   00:00    

Deploy the kubelet and kube-proxy services on node2

[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.245.210 \     //change to this node's own IP


[root@node2 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.245.210     //change to this node's own IP

[root@node2 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \ 
--hostname-override=192.168.245.210 \    //change to this node's own IP

Start kubelet and kube-proxy and enable them at boot

[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

Back on the master node, node2's pending request shows up; approve it so node2 joins the cluster

[root@localhost ~]# kubectl certificate approve node-csr-P3996HQxx_2PLeo9bxBu7TVPcWgbAWqla5yj8Wa_5ks
certificatesigningrequest.certificates.k8s.io/node-csr-P3996HQxx_2PLeo9bxBu7TVPcWgbAWqla5yj8Wa_5ks approved

Once the CSRs have been approved, their condition changes to "Approved,Issued"

[root@localhost ~]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-5zTiKXjy4yGAbkEHJOlepjY4o7_PWA14HX2x0cu4_R4   27m    kubelet-bootstrap   Approved,Issued
node-csr-P3996HQxx_2PLeo9bxBu7TVPcWgbAWqla5yj8Wa_5ks   111s   kubelet-bootstrap   Approved,Issued

The cluster nodes now show as Ready

[root@localhost ~]# kubectl get nodes    //kubectl get node also works
NAME              STATUS   ROLES    AGE   VERSION
192.168.245.209   Ready    <none>   14m   v1.12.3
192.168.245.210   Ready    <none>   34s   v1.12.3

Deploying master02

Copy the certificates and the service configuration files from master01 to master02

[root@master01 ~]# scp -r /opt/kubernetes/ root@192.168.245.206:/opt

Copy the service unit files from master01 to master02

[root@master01 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.245.206:/usr/lib/systemd/system/
root@192.168.245.206's password: 
kube-apiserver.service                                                                                                                                     100%  282   266.8KB/s   00:00    
kube-controller-manager.service                                                                                                                            100%  317   404.2KB/s   00:00    
kube-scheduler.service                                                                                                                                     100%  281   564.1KB/s   00:00    

In the apiserver config, change the addresses to master02's own IP

[root@master02 cfg]# vim kube-apiserver 
--bind-address=192.168.245.206 \
--advertise-address=192.168.245.206 \

Copy the etcd certificates over so that master02 can communicate with etcd

[root@master01 ~]# scp -r /opt/etcd/ root@192.168.245.206:/opt/
root@192.168.245.206's password: 
etcd                                                                                                                                                       100%  523   146.4KB/s   00:00    
etcd                                                                                                                                                       100%   18MB 114.7MB/s   00:00    
etcdctl                                                                                                                                                    100%   15MB 110.3MB/s   00:00    
ca-key.pem                                                                                                                                                 100% 1679     1.1MB/s   00:00    
ca.pem                                                                                                                                                     100% 1265   344.2KB/s   00:00    
server-key.pem                                                                                                                                             100% 1679   754.6KB/s   00:00    
server.pem                                                                                                                                                 100% 1338   678.1KB/s   00:00    

On master02, start all the services directly and enable them at boot

[root@master02 cfg]# systemctl start kube-apiserver.service
[root@master02 cfg]#  systemctl enable kube-apiserver.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]# systemctl enable kube-controller-manager.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master02 cfg]# systemctl start kube-scheduler.service
[root@master02 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

Set the PATH environment variable so the commands can be run directly

[root@master02 cfg]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@master02 cfg]# source /etc/profile

Check the node status

[root@master02 cfg]# kubectl get node
NAME              STATUS   ROLES    AGE    VERSION
192.168.245.209   Ready    <none>   151m   v1.12.3
192.168.245.210   Ready    <none>   137m   v1.12.3
