Kubernetes Installation and Deployment - Day 03

4. Kubernetes cluster deployment: adding nodes

Every node must run the kubelet and kube-proxy services.
Install the base utilities on each node; my nodes run Ubuntu, so the corresponding command is: apt-get install ipvsadm ipset conntrack

4.1 Preparation

4.1.1 Prepare the binaries

[root@k8s-master1 src]# scp kubernetes/server/bin/kube-proxy  kubernetes/server/bin/kubelet  192.168.100.107:/opt/kubernetes/bin/
[root@k8s-master1 src]# scp kubernetes/server/bin/kube-proxy kubernetes/server/bin/kubelet  192.168.100.108:/opt/kubernetes/bin/

4.1.2 Role binding

[root@k8s-master1 src]#  kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

4.1.3 Create the kubelet bootstrapping kubeconfig file and set the cluster parameters

[root@k8s-master1 src]# kubectl config set-cluster kubernetes \
   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
   --embed-certs=true \
   --server=https://100.114.29.54:6443 \
   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

4.1.4 Set the client authentication parameters

[root@k8s-master1 src]# kubectl config set-credentials kubelet-bootstrap \
   --token=7833079f23e0e9fd321d4780b92d6826 \
   --kubeconfig=bootstrap.kubeconfig

The token value was generated earlier; if you cannot find it, check the file /opt/kubernetes/ssl/bootstrap-token.csv.
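If you ever need to regenerate the bootstrap token, the usual approach in binary-deployment guides is a random 16-byte hex string written into the token CSV. A minimal sketch, run in the working directory for illustration (on a real master the file lives at /opt/kubernetes/ssl/bootstrap-token.csv, and the value produced here is a new token, not the one shown above):

```shell
# Generate a random 16-byte hex token (32 hex characters)
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')

# Write the token CSV line: token,user,uid,"group"
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" \
  > bootstrap-token.csv
cat bootstrap-token.csv
```

After regenerating the token, re-run the set-credentials command above with the new value, and restart kube-apiserver so it picks up the changed token file.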

4.1.5 Set the context

[root@k8s-master1 src]# kubectl config set-context default \
   --cluster=kubernetes \
   --user=kubelet-bootstrap \
   --kubeconfig=bootstrap.kubeconfig

4.1.6 Select the default context

[root@k8s-master1 src]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".

4.1.7 Copy the generated file to the nodes

[root@k8s-master1 src]# scp bootstrap.kubeconfig  192.168.100.108:/opt/kubernetes/cfg/ # the file generated by the steps above
[root@k8s-master1 src]# scp bootstrap.kubeconfig  192.168.100.109:/opt/kubernetes/cfg/

4.2 Node deployment

Create the required directories: mkdir -p /etc/cni/net.d; mkdir -p /var/lib/kubelet

 [root@k8s-node1 ~]# cat  /etc/cni/net.d/10-default.conf
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}
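The CNI file above can also be created non-interactively with a heredoc and sanity-checked with a JSON parser before kubelet is started (a malformed file leaves the node NotReady). A sketch, writing to a local net.d directory for illustration; on a real node the target is /etc/cni/net.d/10-default.conf:

```shell
mkdir -p net.d   # stand-in for /etc/cni/net.d

# Write the flannel CNI delegate config shown above
cat > net.d/10-default.conf <<'EOF'
{
        "name": "flannel",
        "type": "flannel",
        "delegate": {
            "bridge": "docker0",
            "isDefaultGateway": true,
            "mtu": 1400
        }
}
EOF

# Fail fast if the JSON is malformed
python3 -m json.tool net.d/10-default.conf > /dev/null && echo "CNI config OK"
```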

4.2.1 Create the kubelet service on every node

[root@k8s-node2 ~]#  vim /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=192.168.100.108 \
  --hostname-override=192.168.100.108 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.1 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

Start the service

root@k8s-node1:/opt/kubernetes/bin# systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
The unit files have no installation config (WantedBy, RequiredBy, Also, Alias
settings in the [Install] section, and DefaultInstance for template units).
This means they are not meant to be enabled using systemctl.
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/lib/systemd/system/kubelet.service; static; vendor preset: enabled)
   Active: active (running) since Wed 2019-05-29 13:57:02 CST; 38ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 26736 (kubelet)
    Tasks: 1 (limit: 2323)
   CGroup: /system.slice/kubelet.service
           └─26736 /opt/kubernetes/bin/kubelet --address=10.51.67.209 --hostname-override=10.51.67.209 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64

May 29 13:57:02 k8s-node1.example.com systemd[1]: Started Kubernetes Kubelet.

Note: systemd prints this warning because the unit file above has no [Install] section, so systemctl enable kubelet has no effect (the unit loads as "static"). To have kubelet start at boot, add an [Install] section with WantedBy=multi-user.target, as the kube-proxy unit in 4.2.2 does.

Check the CSR request on the master:

[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-tBQnts-gAMLQSCf9L4RsqdKV94ZR3A_jfW1uupsHDkE   19s       kubelet-bootstrap   Pending

Approve the TLS request on the master.
Once this completes, the node's status will be Ready.

[root@k8s-master1 ~]# kubectl get csr|grep 'Pending' |awk 'NR>0{print $1}'|xargs kubectl certificate approve
certificatesigningrequest.certificates.k8s.io/node-csr-tBQnts-gAMLQSCf9L4RsqdKV94ZR3A_jfW1uupsHDkE approved
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-tBQnts-gAMLQSCf9L4RsqdKV94ZR3A_jfW1uupsHDkE   2m        kubelet-bootstrap   Approved,Issued

Check the node details:

[root@k8s-master1 ~]# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
10.51.67.209   Ready     <none>    3m        v1.11.1
[root@k8s-master1 ~]# kubectl get nodes -o wide
NAME           STATUS    ROLES     AGE       VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
10.51.67.209   Ready     <none>    9m        v1.11.1   10.51.67.209   <none>        Ubuntu 18.04.2 LTS   4.15.0-45-generic   docker://18.3.1

4.2.2 Install kube-proxy

Create the kube-proxy certificate on the master

[root@k8s-master1 src]#  vim kube-proxy-csr.json

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Generate the certificate

[root@k8s-master1 kube-proxy]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem    -ca-key=/opt/kubernetes/ssl/ca-key.pem    -config=/opt/kubernetes/ssl/ca-config.json    -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/05/29 17:33:45 [INFO] generate received request
2019/05/29 17:33:45 [INFO] received CSR
2019/05/29 17:33:45 [INFO] generating key: rsa-2048
2019/05/29 17:33:45 [INFO] encoded CSR
2019/05/29 17:33:45 [INFO] signed certificate with serial number 360163199981782941472834263644683989131451600588
2019/05/29 17:33:45 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master1 kube-proxy]# ll
total 16
-rw-r--r-- 1 root root 1009 May 29 17:33 kube-proxy.csr
-rw-r--r-- 1 root root  230 May 29 17:32 kube-proxy-csr.json
-rw------- 1 root root 1675 May 29 17:33 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 May 29 17:33 kube-proxy.pem
[root@k8s-master1 kube-proxy]# 
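To verify what cfssl produced, openssl can print the subject and validity window of a certificate. So the example is self-contained, the commands below generate a throwaway self-signed stand-in (demo.pem) with the same subject fields; on a real master, run the same x509 commands against /opt/kubernetes/ssl/kube-proxy.pem:

```shell
# Create a throwaway key and self-signed cert as a stand-in for kube-proxy.pem
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-key.pem -out demo.pem \
  -days 1 -subj "/C=CN/ST=BeiJing/L=BeiJing/O=k8s/OU=System/CN=system:kube-proxy"

# Inspect the subject and validity dates (same flags work on kube-proxy.pem)
openssl x509 -noout -subject -in demo.pem
openssl x509 -noout -dates -in demo.pem
```

The CN of system:kube-proxy matters: RBAC's built-in system:node-proxier role is bound to that identity, which is why the CSR uses it.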

Copy the certificates to each node

[root@k8s-master1 src]# cp kube-proxy*.pem /opt/kubernetes/ssl/
[root@k8s-master1 src]# bash /root/ssh.sh

Create the kube-proxy kubeconfig file

[root@k8s-master1 src]# kubectl config set-cluster kubernetes \
    --certificate-authority=/opt/kubernetes/ssl/ca.pem \
    --embed-certs=true \
    --server=https://192.168.100.112:6443 \
    --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

[root@k8s-master1 src]# kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
    --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

[root@k8s-master1 src]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 src]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".

Distribute the kubeconfig file

[root@k8s-master1 src]# scp kube-proxy.kubeconfig  192.168.100.108:/opt/kubernetes/cfg/
[root@k8s-master1 src]# scp kube-proxy.kubeconfig  192.168.100.109:/opt/kubernetes/cfg/

Create the kube-proxy configuration

[root@k8s-node1 ~]#  mkdir /var/lib/kube-proxy
[root@k8s-node2 ~]#  mkdir /var/lib/kube-proxy


vim /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=192.168.100.109 \
  --hostname-override=192.168.100.109 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --logtostderr=false \
  --v=2 \
  --log-dir=/opt/kubernetes/log

Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Start and verify the service

[root@k8s-node1 ~]#  systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy && systemctl status kube-proxy
[root@k8s-node2 ~]#  systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy && systemctl status kube-proxy

root@k8s-node1:/opt/kubernetes/ssl# systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy && systemctl status kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/lib/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-05-31 10:13:17 CST; 25ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 26016 (kube-proxy)
    Tasks: 1 (limit: 2323)
   CGroup: /system.slice/kube-proxy.service
           └─26016 /opt/kubernetes/bin/kube-proxy --bind-address=10.51.67.209 --hostname-override=10.51.67.209 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

May 31 10:13:17 k8s-node1.example.com systemd[1]: Started Kubernetes Kube-Proxy Server.

Install ipvsadm, ipset, and conntrack:
apt-get install ipvsadm ipset conntrack
root@k8s-node1:/opt/kubernetes/cfg# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr
  -> 192.168.100.101:6443         Masq    1      0          0         
  -> 192.168.100.102:6443         Masq    1      0          0 
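For scripting (for example, a health check that both apiserver backends are present in the IPVS table), the real-server list can be extracted from ipvsadm output with a small awk filter. A sketch, run here against the captured output above so it is self-contained; on a live node, pipe `ipvsadm -L -n` directly into the awk command:

```shell
# Captured `ipvsadm -L -n` output from above (stand-in for a live query)
ipvs_output='TCP  10.1.0.1:443 rr
  -> 192.168.100.101:6443         Masq    1      0          0
  -> 192.168.100.102:6443         Masq    1      0          0'

# Print only the real-server addresses behind each virtual service
echo "$ipvs_output" | awk '/->/{print $2}'
# → 192.168.100.101:6443
#   192.168.100.102:6443
```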
