Kubernetes 1.16: Building a K8s Cluster from Binaries (One Master, Multiple Nodes)

Contents

    • 1. Environment
    • 2. Generating the cluster CA certificates with the cfssl, cfssljson, and cfssl-certinfo tools
    • 3. Downloading the Kubernetes binaries
    • 4. Deploying the master components
      • 4.1 Copy the binaries to /usr/bin/ and make them executable
      • 4.2 Write the configuration files for each master component
      • 4.3 Write the systemd unit files and copy them to /usr/lib/systemd/system/
      • 4.4 Start the components on the master and check their status; all three running means the master node is up
    • 5. Setting up the node
      • 5.1 Copy the certificates
      • 5.2 Copy the node binaries and make them executable
      • 5.3 Write the node configuration files
      • 5.4 Install Docker from binaries: download the archive and extract it to /usr/bin/
      • 5.5 Start the node components: docker, kubelet, kube-proxy
    • 6. Verifying the cluster
      • 6.1 Approve the node CSR on the master
    • 7. Adding a network plugin to the cluster
      • 7.1 Install the CNI plugins on the node, extracting the binaries to /opt/cni/bin/
      • 7.2 Install the flannel plugin
      • 7.3 Verify the cluster state again; node status Ready means the cluster is up

1. Environment

OS: CentOS 7
Kubernetes: v1.16.2
Docker: 18.09.6

Hostname            IP address       Role
K8S-MASTER-ETCD01   192.168.1.121    master
K8S-MASTER-ETCD02   192.168.1.122    master
K8S-MASTER-ETCD03   192.168.1.123    master
K8S-NODE01          192.168.1.124    node
K8S-NODE02          192.168.1.125    node
JENKINS-NGINX01     192.168.1.181    load balancer
GITLAB-NGINX01      192.168.1.182    load balancer
vip                 192.168.1.180    VIP

Unless otherwise noted, the commands below are run on K8S-MASTER-ETCD01.
etcd cluster setup: https://blog.csdn.net/qq_36783142/article/details/103449670

2. Generating the cluster CA certificates with the cfssl, cfssljson, and cfssl-certinfo tools
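The transcript below assumes the cfssl tools are already on the PATH. A minimal install sketch, assuming the commonly used R1.2 release URLs:

[root@K8S-MASTER-ETCD01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/bin/cfssl
[root@K8S-MASTER-ETCD01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/bin/cfssljson
[root@K8S-MASTER-ETCD01 ~]# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/bin/cfssl-certinfo
[root@K8S-MASTER-ETCD01 ~]# chmod +x /usr/bin/cfssl /usr/bin/cfssljson /usr/bin/cfssl-certinfo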

## Create the CA configuration files in JSON format:
[root@K8S-MASTER-ETCD01 ~]# mkdir -p /etc/ssl/k8s && cd /etc/ssl/k8s/
[root@K8S-MASTER-ETCD01 k8s]# ls /etc/ssl/k8s/
ca-config.json  ca-csr.json  kube-proxy-csr.json  server-csr.json
[root@K8S-MASTER-ETCD01 k8s]# cat /etc/ssl/k8s/ca-config.json 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
[root@K8S-MASTER-ETCD01 k8s]# cat /etc/ssl/k8s/ca-csr.json 
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
[root@K8S-MASTER-ETCD01 k8s]# cat /etc/ssl/k8s/kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
[root@K8S-MASTER-ETCD01 k8s]# cat /etc/ssl/k8s/server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local",
      "192.168.1.121",
      "192.168.1.122",
      "192.168.1.123",
      "192.168.1.124",
      "192.168.1.125",
      "192.168.1.126",
      "192.168.1.180",
      "192.168.1.181",
      "192.168.1.182",
      "192.168.1.183",
      "192.168.1.184",
      "192.168.1.185"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}


[root@K8S-MASTER-ETCD01 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

2019/12/09 15:24:22 [INFO] generating a new CA key and certificate from CSR
2019/12/09 15:24:22 [INFO] generate received request
2019/12/09 15:24:22 [INFO] received CSR
2019/12/09 15:24:22 [INFO] generating key: rsa-2048
2019/12/09 15:24:23 [INFO] encoded CSR
2019/12/09 15:24:23 [INFO] signed certificate with serial number 541627502072534331671872144850539279789924132997

[root@K8S-MASTER-ETCD01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/12/09 15:24:23 [INFO] generate received request
2019/12/09 15:24:23 [INFO] received CSR
2019/12/09 15:24:23 [INFO] generating key: rsa-2048
2019/12/09 15:24:23 [INFO] encoded CSR
2019/12/09 15:24:23 [INFO] signed certificate with serial number 25320982012483409022177577986851204788419067984
2019/12/09 15:24:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
## Generate the certificate for the kube-proxy component; nodes use it to talk to the apiserver:
[root@K8S-MASTER-ETCD01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/12/09 15:24:23 [INFO] generate received request
2019/12/09 15:24:23 [INFO] received CSR
2019/12/09 15:24:23 [INFO] generating key: rsa-2048
2019/12/09 15:24:24 [INFO] encoded CSR
2019/12/09 15:24:24 [INFO] signed certificate with serial number 47873770289254085633668911217080665148648210714
2019/12/09 15:24:24 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements")
[root@K8S-MASTER-ETCD01 k8s]# ls /etc/ssl/k8s/
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem  server.csr  server-csr.json  server-key.pem  server.pem

3. Downloading the Kubernetes binaries

Download link: https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz
Extract the kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl binaries, as sketched below.
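A sketch of fetching the tarball and pulling out the master binaries (the server tarball unpacks to kubernetes/server/bin/):

[root@K8S-MASTER-ETCD01 ~]# wget https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz
[root@K8S-MASTER-ETCD01 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@K8S-MASTER-ETCD01 ~]# cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/bin/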

4. Deploying the master components

4.1 Copy the binaries to /usr/bin/ and make them executable

[root@K8S-MASTER-ETCD01 ~]# ls /usr/bin/kube*
/usr/bin/kube-apiserver  /usr/bin/kube-controller-manager  /usr/bin/kubectl  /usr/bin/kube-scheduler
[root@K8S-MASTER-ETCD01 ~]# chmod +x /usr/bin/kube*

4.2 Write the configuration files for each master component

[root@K8S-MASTER-ETCD01 ~]# ansible all  -m shell -a "mkdir -p /etc/kubernetes /etc/ssl/k8s/ /var/log/kubernetes"
[root@K8S-MASTER-ETCD01 kubernetes]# cd /etc/kubernetes/
## The token can be generated separately (see the sketch below), but it must match the token in the node's bootstrap.kubeconfig file
[root@K8S-MASTER-ETCD01 master]# cat /etc/kubernetes/token.csv 
b4cb89950a3e47b704a812af4d1fb2d9,kubelet-bootstrap,10001,"system:node-bootstrapper" 
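A fresh token can be generated instead of reusing the value above; one common sketch (any 32-character hex string works):

[root@K8S-MASTER-ETCD01 kubernetes]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '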

[root@K8S-MASTER-ETCD01 kubernetes]# cat /etc/kubernetes/kube-controller-manager.conf 
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--leader-elect=true \
--master=127.0.0.1:8080 \
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/etc/ssl/k8s/ca.pem \
--cluster-signing-key-file=/etc/ssl/k8s/ca-key.pem  \
--root-ca-file=/etc/ssl/k8s/ca.pem \
--service-account-private-key-file=/etc/ssl/k8s/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
[root@K8S-MASTER-ETCD01 kubernetes]# cat /etc/kubernetes/kube-scheduler.conf 
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"
[root@K8S-MASTER-ETCD01 kubernetes]# cat /etc/kubernetes/kube-apiserver.conf 
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--etcd-servers=https://192.168.1.121:2379,https://192.168.1.122:2379,https://192.168.1.123:2379 \
--bind-address=192.168.1.121 \
--secure-port=6443 \
--advertise-address=192.168.1.121 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--service-node-port-range=30000-32767 \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/token.csv \
--kubelet-client-certificate=/etc/ssl/k8s/server.pem \
--kubelet-client-key=/etc/ssl/k8s/server-key.pem \
--tls-cert-file=/etc/ssl/k8s/server.pem  \
--tls-private-key-file=/etc/ssl/k8s/server-key.pem \
--client-ca-file=/etc/ssl/k8s/ca.pem \
--service-account-key-file=/etc/ssl/k8s/ca-key.pem \
--etcd-cafile=/etc/ssl/etcd/ca.pem \
--etcd-certfile=/etc/ssl/etcd/server.pem \
--etcd-keyfile=/etc/ssl/etcd/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
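The same configuration files can be reused on the other two masters; only the addresses the apiserver binds and advertises need to change per host. A sketch for K8S-MASTER-ETCD02 (run the analogous substitution with 192.168.1.123 on the third master):

[root@K8S-MASTER-ETCD02 ~]# sed -i -e 's/--bind-address=192.168.1.121/--bind-address=192.168.1.122/' \
    -e 's/--advertise-address=192.168.1.121/--advertise-address=192.168.1.122/' \
    /etc/kubernetes/kube-apiserver.conf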

4.3 Write the systemd unit files and copy them to /usr/lib/systemd/system/:

[root@K8S-MASTER-ETCD01 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@K8S-MASTER-ETCD01 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

[root@K8S-MASTER-ETCD01 ~]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

4.4 Start the components on the master and check their status; all three running means the master node is up

[root@K8S-MASTER-ETCD01 ~]#  systemctl start kube-controller-manager kube-apiserver kube-scheduler 
[root@K8S-MASTER-ETCD01 ~]#  systemctl enable kube-controller-manager kube-apiserver kube-scheduler 
[root@K8S-MASTER-ETCD01 ~]#  systemctl status kube-controller-manager kube-apiserver kube-scheduler 
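As a quick sanity check, kubectl on the master should be able to reach the apiserver, and the component statuses should report Healthy (the exact column layout of this command varies across kubectl versions):

[root@K8S-MASTER-ETCD01 ~]# kubectl get cs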

5. Setting up the node

Fetch the node components: docker, the CNI network plugins, kubelet, and kube-proxy.

[root@K8S-NODE01 ~]# mkdir -p /etc/kubernetes/  /etc/ssl/k8s/ /var/log/kubernetes/ /opt/cni/bin/ /etc/cni/net.d/

5.1 Copy the certificates

[root@K8S-NODE01 ~]# scp 192.168.1.121:/etc/ssl/k8s/kube-proxy.pem /etc/ssl/k8s/
[root@K8S-NODE01 ~]# scp 192.168.1.121:/etc/ssl/k8s/kube-proxy-key.pem /etc/ssl/k8s/
[root@K8S-NODE01 ~]# scp 192.168.1.121:/etc/ssl/k8s/ca.pem /etc/ssl/k8s/
 

5.2 Copy the node binaries and make them executable


[root@K8S-NODE01 ~]# ls /usr/bin/kube*
/usr/bin/kubelet  /usr/bin/kube-proxy
[root@K8S-NODE01 ~]# chmod +x /usr/bin/kube*

5.3 Write the node configuration files

## The token in this file must match the one in token.csv on the master
[root@K8S-NODE01 ~]# cat /etc/kubernetes/bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/ssl/k8s/ca.pem
    server: https://192.168.1.121:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: b4cb89950a3e47b704a812af4d1fb2d9
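
Rather than writing bootstrap.kubeconfig by hand, it can also be generated with kubectl; a sketch using the values above (run wherever kubectl and the CA are available, then copy the result to the node):

[root@K8S-MASTER-ETCD01 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/ssl/k8s/ca.pem \
    --server=https://192.168.1.121:6443 \
    --kubeconfig=bootstrap.kubeconfig
[root@K8S-MASTER-ETCD01 ~]# kubectl config set-credentials kubelet-bootstrap \
    --token=b4cb89950a3e47b704a812af4d1fb2d9 \
    --kubeconfig=bootstrap.kubeconfig
[root@K8S-MASTER-ETCD01 ~]# kubectl config set-context default \
    --cluster=kubernetes --user=kubelet-bootstrap \
    --kubeconfig=bootstrap.kubeconfig
[root@K8S-MASTER-ETCD01 ~]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig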
    
[root@K8S-NODE01 ~]# cat /etc/kubernetes/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--hostname-override=k8s-node01 \
--network-plugin=cni \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet-config.yml \
--cert-dir=/etc/ssl/k8s/ \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.0"

[root@K8S-NODE01 ~]# cat /etc/kubernetes/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/ssl/k8s/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
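The cgroupDriver above must match Docker's cgroup driver; the binary Docker install in section 5.4 defaults to cgroupfs, which can be confirmed once Docker is running (expected output shown):

[root@K8S-NODE01 ~]# docker info | grep -i 'cgroup driver'
Cgroup Driver: cgroupfs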

## kubelet.kubeconfig is normally written by the kubelet itself once the bootstrap CSR is approved; shown here for reference
[root@K8S-NODE01 ~]# cat /etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/ssl/k8s/ca.pem
    server: https://192.168.1.121:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /etc/ssl/k8s/kubelet-client-current.pem
    client-key: /etc/ssl/k8s/kubelet-client-current.pem

[root@K8S-NODE01 ~]# cat /etc/kubernetes/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--config=/etc/kubernetes/kube-proxy-config.yml"

[root@K8S-NODE01 ~]# cat /etc/kubernetes/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: k8s-node01
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
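mode: ipvs only takes effect if the IPVS kernel modules are loaded; otherwise kube-proxy falls back to iptables. A sketch for loading them on CentOS 7 (nf_conntrack_ipv4 is the module name on 3.10 kernels):

[root@K8S-NODE01 ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done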
  
[root@K8S-NODE01 ~]# cat /etc/kubernetes/kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/ssl/k8s/ca.pem
    server: https://192.168.1.121:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /etc/ssl/k8s/kube-proxy.pem
    client-key: /etc/ssl/k8s/kube-proxy-key.pem
[root@K8S-NODE01 ~]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

[root@K8S-NODE01 ~]# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@K8S-NODE01 ~]# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.4 Install Docker from binaries: download the Docker binary archive and extract it to /usr/bin/

[root@K8S-NODE01 ~]# wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz
[root@K8S-NODE01 ~]# ls docker-18.09.6.tgz 
docker-18.09.6.tgz
[root@K8S-NODE01 ~]# tar -zxvf docker-18.09.6.tgz 
[root@K8S-NODE01 ~]# ls docker
containerd  containerd-shim  ctr  docker  dockerd  docker-init  docker-proxy  runc
[root@K8S-NODE01 ~]# cp docker/* /usr/bin/

5.5 Start the node components: docker, kubelet, kube-proxy

[root@K8S-NODE01 ~]# systemctl start docker kubelet kube-proxy
[root@K8S-NODE01 ~]# systemctl enable docker kubelet kube-proxy
[root@K8S-NODE01 ~]# systemctl status docker kubelet kube-proxy

6. Verifying the cluster

6.1 Approve the node CSR on the master
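
The bootstrap only works if the kubelet-bootstrap user is allowed to create certificate signing requests. If kubectl get csr below comes back empty, this binding (not shown in the original transcript) is the usual missing step:

[root@K8S-MASTER-ETCD01 ~]# kubectl create clusterrolebinding kubelet-bootstrap \
    --clusterrole=system:node-bootstrapper \
    --user=kubelet-bootstrap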

[root@K8S-MASTER-ETCD01 ~]# kubectl get csr 
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-7WZgooL6IWOVfc7boOBCOFYGuEPxr7opewRHaIAUwoc   7s      kubelet-bootstrap   Pending
[root@K8S-MASTER-ETCD01 ~]# kubectl certificate approve  node-csr-7WZgooL6IWOVfc7boOBCOFYGuEPxr7opewRHaIAUwoc
[root@K8S-MASTER-ETCD01 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-7WZgooL6IWOVfc7boOBCOFYGuEPxr7opewRHaIAUwoc   61s     kubelet-bootstrap   Approved,Issued

[root@K8S-MASTER-ETCD01 ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-node01   NotReady   <none>   15s   v1.16.2

7. Adding a network plugin to the cluster

7.1 Install the CNI plugins on the node, extracting the binaries to /opt/cni/bin/
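
The plugin archive comes from the containernetworking/plugins GitHub releases; the URL below is assumed for v0.8.2:

[root@K8S-NODE01 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz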

[root@K8S-NODE01 ~]# ls cni-plugins-linux-amd64-v0.8.2.tgz 
cni-plugins-linux-amd64-v0.8.2.tgz
[root@K8S-NODE01 ~]# tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
[root@K8S-NODE01 ~]# ls /opt/cni/bin/
bandwidth  bridge  dhcp  firewall  flannel  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  sbr  static  tuning  vlan

7.2 Install the flannel plugin


[root@K8S-MASTER-ETCD01 ~]# cat kube-flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64 
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64 
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
[root@K8S-MASTER-ETCD01 ~]# kubectl apply -f kube-flannel.yaml
[root@K8S-MASTER-ETCD01 ~]# kubectl get pod -n kube-system
NAME                          READY   STATUS     RESTARTS   AGE
kube-flannel-ds-amd64-skqck   0/1     Init:0/1   0          79s
[root@K8S-MASTER-ETCD01 ~]# kubectl describe pod kube-flannel-ds-amd64-skqck -n kube-system
Name:         kube-flannel-ds-amd64-skqck
Namespace:    kube-system
Priority:     0
Node:         k8s-node01/192.168.1.124
Start Time:   Mon, 09 Dec 2019 22:04:34 +0800
Labels:       app=flannel
              controller-revision-hash=67f65bfbc7
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Pending
IP:           192.168.1.124
IPs:
  IP:           192.168.1.124
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:  
    Image:         quay.io/coreos/flannel:v0.11.0-amd64
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-plzmm (ro)
Containers:
  kube-flannel:
    Container ID:  
    Image:         quay.io/coreos/flannel:v0.11.0-amd64
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-skqck (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-plzmm (ro)
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-plzmm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-plzmm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason     Age        From                 Message
  ----    ------     ----       ----                 -------
  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kube-system/kube-flannel-ds-amd64-skqck to k8s-node01
  Normal  Pulling    58s        kubelet, k8s-node01  Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"

7.3 Verify the cluster state again; the node status is Ready, so the cluster is up


[root@K8S-MASTER-ETCD01 ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-node01   Ready    <none>   92m   v1.16.2
