2020-07-14 Building a Complete Kubernetes Cluster (Part 2)

1. Single-Master Cluster: Deploying the Node Components

The node components are kubelet and kube-proxy.
1) On the master node, find the binary package download page:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161
2) Download kubernetes-server-linux-amd64.tar.gz:
wget https://storage.googleapis.com/kubernetes-release/release/v1.16.6/kubernetes-server-linux-amd64.tar.gz
3) Unpack the archive:
tar -zxvf kubernetes-server-linux-amd64.tar.gz
or
mkdir a
tar -zxvf kubernetes-server-linux-amd64.tar.gz -C a
cp a/kubernetes/server/bin/kubelet kubernetes/bin/
cp a/kubernetes/server/bin/kube-proxy kubernetes/bin/
4) This yields the kubernetes directory:
cd kubernetes
5) From kubernetes/server/bin we only need the following files:

  • kubelet
  • kube-proxy

The steps above are only for fetching the latest k8s binaries from the official release page; besides the binaries, a few auxiliary files are also needed.
The k8s-node.tar.gz used earlier already works as-is, so these steps can be skipped.

[root@node1 ~]# tree kubernetes
kubernetes
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig    ---config used to request a certificate
│   ├── kubelet.conf
│   ├── kubelet-config.yml    ---dynamically adjustable kubelet configuration
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml    ---dynamically adjustable kube-proxy configuration
│   └── kube-proxy.kubeconfig    ---kubeconfig used to connect to the apiserver
├── logs
└── ssl
4 directories, 8 files

1.1 Meaning of the configuration file suffixes

.conf #basic configuration file
.kubeconfig #configuration for connecting to the apiserver
.yml #main configuration file (dynamically updatable)

Inspect kubelet.conf

[root@node1 ~]# cd kubernetes/
[root@node1 kubernetes]# ls
bin  cfg  logs  ssl
[root@node1 kubernetes]# cd cfg/
[root@node1 cfg]# ls
bootstrap.kubeconfig  kubelet-config.yml  kube-proxy-config.yml
kubelet.conf          kube-proxy.conf     kube-proxy.kubeconfig
[root@node1 cfg]# cat kubelet.conf 
KUBELET_OPTS="--logtostderr=false \    ---log to files instead of stderr
--v=2 \    ---log level
--log-dir=/opt/kubernetes/logs \    ---log directory
--hostname-override=node1 \    ---node name; must be unique, change it on every node
--network-plugin=cni \    ---enable the CNI network plugin
#paths of the kubeconfig/config files
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \    ---directory where the certificates issued to this node are stored
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"    ---pause image used to start pods; it holds the pod's namespaces
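
For reference, the kubelet.service unit moved into /usr/lib/systemd/system later on does little more than load this file and pass $KUBELET_OPTS to the binary. A minimal sketch of such a unit, assuming the paths used in this article (the unit shipped in k8s-node.tar.gz may differ in detail):

[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target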

Inspect bootstrap.kubeconfig

[root@node1 cfg]# cat bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.9.30:6443    ---master1 server IP (internal IP)
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940    ---must match the token in /opt/kubernetes/cfg/token.csv on the master

To avoid the complexity of manually issuing certificates to every kubelet, Kubernetes uses the TLS bootstrapping mechanism, which automatically issues kubelet client certificates to nodes joining the cluster; every component that connects to the apiserver needs a certificate.

TLS bootstrapping workflow (kubelet) (diagram omitted)
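
For context, the master-side pieces of this mechanism were prepared in the previous part: the bootstrap token lives in token.csv and the kubelet-bootstrap user must be allowed to create certificate signing requests. A brief sketch, shown only as a reminder (adjust the token to your own):

# token.csv format: token,user,uid,"group"
cat /opt/kubernetes/cfg/token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
# allow the kubelet-bootstrap user to request certificates (run once on the master)
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap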

Inspect kubelet-config.yml

[root@node1 cfg]# cat kubelet-config.yml
kind: KubeletConfiguration    ---object kind
apiVersion: kubelet.config.k8s.io/v1beta1    ---API version
address: 0.0.0.0    ---listen address
port: 10250    ---kubelet port
readOnlyPort: 10255    ---kubelet read-only port
cgroupDriver: cgroupfs    ---cgroup driver; must match the driver shown by docker info
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local     ---cluster domain
failSwapOn: false    ---do not fail when swap is enabled
#authentication and authorization
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
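
Because a cgroupDriver mismatch prevents the kubelet from registering, compare it against Docker before starting the service (a quick check; the output shown is what the default Docker install used in this article is expected to report):

# must report the same driver as cgroupDriver in kubelet-config.yml
docker info 2>/dev/null | grep -i "cgroup driver"
Cgroup Driver: cgroupfs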

Inspect kube-proxy.kubeconfig

[root@node1 cfg]# cat kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem    ---the CA certificate
    server: https://192.168.9.30:6443    ---master IP (internal)
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem

Inspect kube-proxy-config.yml

[root@node1 cfg]# cat kube-proxy-config.yml 
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0    ---listen address
metricsBindAddress: 0.0.0.0:10249    ---metrics endpoint address
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig    ---kubeconfig to read
hostnameOverride: node1    ---node name registered in k8s
clusterCIDR: 10.0.0.0/24
mode: ipvs    ---proxy mode; ipvs performs better, the default is iptables
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
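
ipvs mode also needs the ip_vs kernel modules; if they are missing, kube-proxy silently falls back to iptables. A hedged sketch of loading and checking them on each node (module names can differ slightly by kernel version, e.g. nf_conntrack instead of nf_conntrack_ipv4 on newer kernels):

# load the ipvs-related kernel modules
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
# confirm they are loaded
lsmod | grep -E 'ip_vs|nf_conntrack'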

Move the files to the working directory

[root@node1 cfg]# cd
[root@node1 ~]# mv kubernetes/ /opt/

Move the service unit files into place

[root@node1 ~]# ls
anaconda-ks.cfg                     docker-18.09.6.tgz  kube-proxy.service
cni-plugins-linux-amd64-v0.8.2.tgz  k8s-node.tar.gz
docker                              kubelet.service
[root@node1 ~]# mv *service /usr/lib/systemd/system/

On master1, distribute the certificates to the node

[root@master1 ~]# cd TLS/k8s/
[root@master1 k8s]# ls
ca-config.json  ca-csr.json  ca.pem                kube-proxy.csr       kube-proxy-key.pem  server.csr       server-key.pem
ca.csr          ca-key.pem   generate_k8s_cert.sh  kube-proxy-csr.json  kube-proxy.pem      server-csr.json  server.pem
[root@master1 k8s]# scp ca.pem kube-proxy*pem 192.168.9.32:/opt/kubernetes/ssl/    ---only these 3 files need to be copied
[email protected]'s password: 
ca.pem                                                                                                                                                     100% 1359   110.1KB/s   00:00    
kube-proxy-key.pem                                                                                                                                         100% 1679   756.4KB/s   00:00    
kube-proxy.pem

Start the services, enable them at boot, and check the logs

[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# systemctl enable kubelet
[root@node1 ~]# tail /opt/kubernetes/logs/kubelet.INFO
I0705 17:14:58.594488   19568 feature_gate.go:216] feature gates: &{map[]}
I0705 17:14:58.594518   19568 feature_gate.go:216] feature gates: &{map[]}
I0705 17:14:58.627140   19568 mount_linux.go:168] Detected OS with systemd
I0705 17:14:58.627251   19568 server.go:410] Version: v1.16.0
I0705 17:14:58.627344   19568 feature_gate.go:216] feature gates: &{map[]}
I0705 17:14:58.627380   19568 feature_gate.go:216] feature gates: &{map[]}
I0705 17:14:58.627493   19568 plugins.go:100] No cloud provider specified.
I0705 17:14:58.627516   19568 server.go:526] No cloud provider specified: "" from the config file: ""
I0705 17:14:58.627588   19568 bootstrap.go:119] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
I0705 17:14:58.630188   19568 bootstrap.go:150] No valid private key and/or certificate found, reusing existing private key or creating a new one

[root@node1 ~]# systemctl start kube-proxy
[root@node1 ~]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

On master1, check for pending certificate signing requests

[root@master1 k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-rJ-YWicqqGgoH7n04Fp6EeXDqDe6Zb-e2uma0qY1AG0   14m   kubelet-bootstrap   Pending

On master1, approve the request to issue the node its certificate

[root@master1 k8s]# kubectl certificate approve node-csr-rJ-YWicqqGgoH7n04Fp6EeXDqDe6Zb-e2uma0qY1AG0
certificatesigningrequest.certificates.k8s.io/node-csr-rJ-YWicqqGgoH7n04Fp6EeXDqDe6Zb-e2uma0qY1AG0 approved

On master1, list the nodes that have been issued certificates

[root@master1 k8s]# kubectl get node
NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   <none>   5s    v1.16.0

The issued certificate files can now be seen on the node

[root@node1 ~]# ls /opt/kubernetes/ssl
ca.pem  kubelet-client-2020-07-05-17-30-03.pem  kubelet-client-current.pem  kubelet.crt  kubelet.key  kube-proxy-key.pem  kube-proxy.pem

Deploy the remaining node(s) in exactly the same way; the detailed steps are omitted here (see the sketch below).
Each node needs docker, kubelet and kube-proxy.
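
A sketch of what those same steps amount to when cloning node1's configuration to node2 (192.168.9.35 is node2's address in this article; the key points are removing the certificates issued to node1 and changing the node name):

# on node1: copy the working directory and the service units over
scp -r /opt/kubernetes [email protected]:/opt/
scp /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/kube-proxy.service [email protected]:/usr/lib/systemd/system/
# on node2: delete the kubeconfig and certificates generated for node1, then rename the node
rm -f /opt/kubernetes/cfg/kubelet.kubeconfig /opt/kubernetes/ssl/kubelet*
sed -i 's/node1/node2/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml
systemctl start kubelet kube-proxy && systemctl enable kubelet kube-proxy

After that, approve node2's CSR on master1 exactly as was done for node1.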

[root@master1 k8s]# kubectl get node    ---node2 is now configured
NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   <none>   35m   v1.16.0
node2   NotReady   <none>   5s    v1.16.0

2. Deploying the Cluster Network (flannel)

CNI is the container network interface of Kubernetes: any third-party network solution that wants to plug into k8s must follow the CNI standard, and deploying the CNI plugins is what allows such a network to be wired in.
Third-party network plugin implementations: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Installation plan for CNI and flannel:

  • the CNI plugins are installed on every node
  • the flannel manifest is applied from the master node (with kubectl apply)

Install the CNI plugins on the nodes
CNI binary package download: https://github.com/containernetworking/plugins/releases

# download the CNI plugin package
[root@node1 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
# unpack the CNI plugins
[root@node1 ~]# mkdir -p /opt/cni/bin    ---working directory
[root@node1 ~]# mkdir -p /etc/cni/net.d    ---configuration directory
[root@node1 ~]# tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin
After node1, configure the other nodes the same way (a sketch follows).
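
To replicate this on node2, either repeat the wget/tar steps there or push the already-unpacked plugins across (a sketch, again using 192.168.9.35 for node2):

ssh [email protected] "mkdir -p /opt/cni/bin /etc/cni/net.d"
scp -r /opt/cni/bin/* [email protected]:/opt/cni/bin/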

Apply flannel from the master node
flannel installation docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube
Or use the prepared YAML file (recommended):

# use the prepared yaml file
[root@master1 ~]# ls    ---upload the prepared kube-flannel.yaml first
anaconda-ks.cfg  etcd  etcd.service  etcd.tar.gz  k8s-master.tar.gz  kube-flannel.yaml  TLS  TLS.tar.gz
[root@master1 ~]# cat kube-flannel.yaml 
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is not used in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "cniVersion": "0.2.0",
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",    ---这个网络要与kube-controller-manager.conf的cluster-cidr的一致
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64 
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64 
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Notes
The flannel manifest only needs to be applied on the master node.
The upstream file (and the image it references) is hard to reach from mainland China without a proxy, so applying it directly from abroad tends to fail and is not recommended; download the YAML to the server first and run kubectl apply -f kube-flannel.yaml.
The net-conf.json network in the YAML must match the cluster-cidr value in /opt/kubernetes/cfg/kube-controller-manager.conf.
If you use a network plugin other than flannel, the same considerations apply.
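
Before applying the manifest it is easy to confirm the two CIDRs line up (a quick check on master1; both values should show 10.244.0.0/16 in this article's setup):

# on master1: compare the controller-manager pod CIDR with the flannel network
grep cluster-cidr /opt/kubernetes/cfg/kube-controller-manager.conf
grep '"Network"' kube-flannel.yaml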

# apply the yaml
[root@master1 ~]# kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
# check after applying
[root@master1 ~]# kubectl get pods -n kube-system    ---(this may take a while)
NAME                          READY   STATUS     RESTARTS   AGE
kube-flannel-ds-amd64-7k8n4   0/1     Init:0/1   0          45s
kube-flannel-ds-amd64-t47qc   0/1     Init:0/1   0          45s
Note:
READY 1/1 means the pod started successfully; 0/1 means it has not started yet.
[root@master1 ~]# kubectl get pods -n kube-system    ---one pod is running, the other has not started yet
NAME                          READY   STATUS     RESTARTS   AGE
kube-flannel-ds-amd64-7k8n4   1/1     Running    0          9m28s
kube-flannel-ds-amd64-t47qc   0/1     Init:0/1   0          9m28s
[root@master1 ~]# kubectl get pods -n kube-system    ---both are running now
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-7k8n4   1/1     Running   0          16m
kube-flannel-ds-amd64-t47qc   1/1     Running   0          16m
[root@master1 ~]# kubectl get node    ---the nodes now work: they changed from NotReady to Ready
NAME    STATUS   ROLES    AGE     VERSION
node1   Ready    <none>   6h43m   v1.16.0
node2   Ready    <none>   6h8m    v1.16.0

If a pod fails to start, check its logs:

[root@master1 ~]# kubectl logs kube-flannel-ds-amd64-t47qc -n kube-system    ---the error shows a permission problem, because the authorization has not been granted yet
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-flannel-ds-amd64-t47qc)
[root@master1 ~]# ls    ---upload the prepared apiserver-to-kubelet-rbac.yaml, which grants the needed cluster-role permissions via RBAC
anaconda-ks.cfg  apiserver-to-kubelet-rbac.yaml  etcd  etcd.service  etcd.tar.gz  k8s-master.tar.gz  kube-flannel.yaml  TLS  TLS.tar.gz
[root@master1 ~]# cat apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
[root@master1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

After flannel is running, each node gains an extra flannel.1 interface

[root@node1 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:da:1a:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.9.32/24 brd 192.168.9.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1440:55d8:179e:da00/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: docker0:  mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:6a:99:63:69 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: dummy0:  mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:dc:35:89:4e:b7 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0:  mtu 1500 qdisc noop state DOWN group default 
    link/ether 6e:2d:11:b8:8f:69 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/32 brd 10.0.0.1 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
6: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether a6:17:07:60:56:53 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::a417:7ff:fe60:5653/64 scope link 
       valid_lft forever preferred_lft forever

Create a pod in the Kubernetes cluster to verify that everything works:

[root@master1 ~]# kubectl create deployment web  --image=nginx    ---create a deployment with one replica
deployment.apps/web created
[root@master1 ~]# kubectl get pods -o wide    ---the replica is scheduled onto one of the nodes, here node2
NAME                  READY   STATUS              RESTARTS   AGE   IP       NODE    NOMINATED NODE   READINESS GATES
web-d86c95cc9-wprd8   0/1     ContainerCreating   0          6s    <none>   node2   <none>           <none>

[root@node2 ~]# ip a    ---node2 now has an extra cni0 interface; it is a bridge that all pod traffic passes through, acting like a virtual switch that every pod on the node is plugged into
---output truncated---
6: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether 92:11:6f:c5:e7:ea brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::9011:6fff:fec5:e7ea/64 scope link 
       valid_lft forever preferred_lft forever
7: cni0:  mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 2e:84:4f:9f:cd:fa brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 brd 10.244.1.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::2c84:4fff:fe9f:cdfa/64 scope link 
       valid_lft forever preferred_lft forever
8: veth484ffa41@if3:  mtu 1450 qdisc noqueue master cni0 state UP group default 
    link/ether ae:1e:c0:60:d2:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ac1e:c0ff:fe60:d299/64 scope link 
       valid_lft forever preferred_lft forever
# expose the port outside the cluster
[root@master1 ~]# kubectl expose deployment web --port=80 --type=NodePort
service/web exposed
[root@master1 ~]# kubectl get pods,svc    ---the exposed NodePort is 31083
NAME                      READY   STATUS              RESTARTS   AGE
pod/web-d86c95cc9-wprd8   0/1     ContainerCreating   0          15m
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        2d8h
service/web          NodePort    10.0.0.114   <none>        80:31083/TCP   24s
Access from both node IPs succeeds (screenshots omitted), which shows the cluster and the network are deployed correctly. (It may take a while before the pod is ready.)
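
The NodePort can also be verified from the command line instead of a browser (the port 31083 and the node IPs come from the outputs above; any host that can reach the nodes will do):

curl -I http://192.168.9.32:31083
curl -I http://192.168.9.35:31083
# an HTTP/1.1 200 OK from nginx on both node IPs confirms the Service and the pod network work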

3. Deploying the Web UI (Dashboard)

Official docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
kubectl apply -f recommended.yaml

Note: by default the upstream manifest does not expose an external port; set one yourself, e.g.
nodePort: 30001
or use the prepared YAML.

[root@master1 ~]# ls    ---upload dashboard.yaml
anaconda-ks.cfg                 etcd          k8s-master.tar.gz  TLS.tar.gz
apiserver-to-kubelet-rbac.yaml  etcd.service  kube-flannel.yaml
dashboard.yaml                  etcd.tar.gz   TLS
[root@master1 ~]# cat dashboard.yaml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

[root@master1 ~]# kubectl apply -f dashboard.yaml

Check the pods

[root@master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS              RESTARTS   AGE
dashboard-metrics-scraper-566cddb686-r9ddd   0/1     ContainerCreating   0          44s
kubernetes-dashboard-7b5bf5d559-22j7h        0/1     ContainerCreating   0          44s
[root@master1 ~]# kubectl get pods -n kubernetes-dashboard    ---ready after a while
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-566cddb686-r9ddd   1/1     Running   0          26m
kubernetes-dashboard-7b5bf5d559-22j7h        1/1     Running   0          26m

Check the port

[root@master1 ~]# kubectl get pods,svc -n kubernetes-dashboard    ---the NodePort is 30001, as set in dashboard.yaml
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-566cddb686-r9ddd   1/1     Running   0          26m
pod/kubernetes-dashboard-7b5bf5d559-22j7h        1/1     Running   0          26m
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.240   <none>        8000/TCP        26m
service/kubernetes-dashboard        NodePort    10.0.0.11    <none>        443:30001/TCP   26m

Access the UI dashboard (https is required, i.e. https://<node-ip>:30001)

Dashboard login page (screenshot omitted)

To log in with a token, create a service account and bind it to the built-in cluster-admin cluster role.

Choose token authentication on the login page.
[root@master1 ~]# ls    ---upload the prepared dashboard-adminuser.yaml
anaconda-ks.cfg                 dashboard.yaml  etcd.tar.gz        TLS
apiserver-to-kubelet-rbac.yaml  etcd            k8s-master.tar.gz  TLS.tar.gz
dashboard-adminuser.yaml        etcd.service    kube-flannel.yaml
[root@master1 ~]# cat dashboard-adminuser.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@master1 ~]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Get the token

[root@master1 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-lkmwn
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 11121a55-d013-41df-a14a-ca3d2dd8f9b2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImU4QjVxU1hadmVEdjZ5bGxKYzVSbno1TlJXRFJDdS02VGg4VzBJWXB6bnMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWxrbXduIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxMTEyMWE1NS1kMDEzLTQxZGYtYTE0YS1jYTNkMmRkOGY5YjIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.W7MeGmxjubgE69YdxMrqle11-7bxm_AeJ-qjm5TnhDASW1Z9qogxGK58WlbIW-DQolCvuDtIliIj76DUgNxkOWxcbDM4q5GV254BIx8etRGGxlTGnNFmqp2ogit7u1jX7CkQeZHEkARQFJRA1rBP9NqrqsYUhj13_xwRAqYn5OorNnMbs73jH07UEKSMF__dOABOLmre_-9jwUWANey4CkObAa-dnICvGPLa25rHG2t1INmEhynARSQmqcKzkGND44wlyHEvfMLVeWHszDJUJm7cwyoL5P2TEWdWMS6A02PNL5_b04D68mI-zo9BGlw33_X1dmzVvyJqu1voHtimAg
Paste the token into the login page.

Login successful (screenshot omitted)

4. Deploying the In-Cluster DNS Service

Official link: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
Purpose: it provides DNS resolution for Services. Once deployed, a Service can be reached by its name and the Service forwards traffic to its pods; without DNS you can still reach a Service by its IP, but that couples clients tightly to the address.
Install the DNS addon

[root@master1 ~]# ls    ---upload the prepared coredns.yaml
anaconda-ks.cfg                 dashboard.yaml  k8s-master.tar.gz
apiserver-to-kubelet-rbac.yaml  etcd            kube-flannel.yaml
coredns.yaml                    etcd.service    TLS
dashboard-adminuser.yaml        etcd.tar.gz     TLS.tar.gz
[root@master1 ~]# cat coredns.yaml 
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: lizhenliang/coredns:1.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2     ---must match the clusterDNS value in /opt/kubernetes/cfg/kubelet-config.yml on the nodes
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Check the existing services

[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        30h
web          NodePort    10.0.0.152   <none>        80:31724/TCP   4h41m

[root@master1 ~]# kubectl apply -f coredns.yaml    ---deploy
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the DNS pod

[root@master1 ~]# kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
coredns-6d8cfdd59d-stnrc      1/1     Running   0          2m52s
kube-flannel-ds-amd64-kt8m2   1/1     Running   0          5h46m
kube-flannel-ds-amd64-stnrc   1/1     Running   0          5h22m

Test DNS resolution

[root@master1 ~]# ls    ---upload the prepared bs.yaml
anaconda-ks.cfg                 dashboard-adminuser.yaml  etcd.tar.gz        TLS.tar.gz
apiserver-to-kubelet-rbac.yaml  dashboard.yaml            k8s-master.tar.gz
bs.yaml                         etcd                      kube-flannel.yaml
coredns.yaml                    etcd.service              TLS
[root@master1 ~]# cat bs.yaml 
apiVersion: v1
kind: Pod
metadata: 
    name: busybox
    namespace: default
spec:
    containers:
      - image: busybox:1.28.4
        command:
          - sleep
          - "3600"
        imagePullPolicy: IfNotPresent
        name: busybox
    restartPolicy: Always
[root@master1 ~]# kubectl apply -f bs.yaml    ---deploy
pod/busybox created

Check the pod we started

[root@master1 ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
busybox               1/1     Running   0          109s
web-d86c95cc9-kt8m2   1/1     Running   0          5h9m

Exec into the container and test

[root@k8s-master1 ~]# kubectl exec -it busybox sh
/ # ping 10.0.0.152
PING 10.0.0.152 (10.0.0.152): 56 data bytes
64 bytes from 10.0.0.152: seq=0 ttl=64 time=0.182 ms
64 bytes from 10.0.0.152: seq=1 ttl=64 time=0.178 ms
^C
--- 10.0.0.152 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.178/0.180/0.182 ms
/ # ping web
PING web (10.0.0.152): 56 data bytes
64 bytes from 10.0.0.152: seq=0 ttl=64 time=0.027 ms
64 bytes from 10.0.0.152: seq=1 ttl=64 time=0.061 ms
^C
--- web ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
/ # nslookup web    ---resolve the web service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      web
Address 1: 10.0.0.152 web.default.svc.cluster.local
/ # nslookup kubernetes    ---resolve the kubernetes service
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

5. Multi-Master Cluster: Deploying the Master2 Components

Multi-master architecture diagram (image omitted)

Copy /opt/kubernetes and the service unit files from master1:

[root@master2 ~]# mkdir -p /opt/etcd    ---create the etcd directory needed on master2
[root@master1 ~]# scp -r /opt/kubernetes [email protected]:/opt/
[root@master1 ~]# scp -r /opt/etcd/ssl [email protected]:/opt/etcd/
[root@master1 ~]# scp /usr/local/bin/kubectl [email protected]:/usr/bin/    ---copy the kubectl binary over as well
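
The "service files" mentioned above are the systemd units; they have to be copied across as well (a sketch using the unit names that appear when the services are enabled below):

scp /usr/lib/systemd/system/kube-apiserver.service \
    /usr/lib/systemd/system/kube-controller-manager.service \
    /usr/lib/systemd/system/kube-scheduler.service \
    [email protected]:/usr/lib/systemd/system/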

Change the apiserver configuration to use master2's local IP:

[root@master2 ~]# cat /opt/kubernetes/cfg/kube-apiserver.conf 
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.9.30:2379,https://192.168.9.32:2379,https://192.168.9.35:2379 \
--bind-address=192.168.9.31 \    ---change to master2's IP
--secure-port=6443 \
--advertise-address=192.168.9.31 \    ---change to master2's IP
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
(the rest of kube-apiserver.conf stays the same as on master1)
# start the services
[root@master2 ~]# systemctl start kube-apiserver
[root@master2 ~]# systemctl start kube-controller-manager
[root@master2 ~]# systemctl start kube-scheduler
[root@master2 ~]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 ~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 ~]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
# being able to query the cluster shows the kubectl setup works
[root@master2 bin]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   30h   v1.16.0
node2   Ready    <none>   30h   v1.16.0
[root@master2 bin]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
busybox               1/1     Running   0          24h
web-d86c95cc9-kt8m2   1/1     Running   0          29h

6. Multi-Master Cluster: Deploying a Highly Available Load Balancer

Perform the following steps on both LB-Master and LB-Backup

[root@LBMaster ~]# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
[root@LBMaster ~]# cat /etc/nginx/nginx.conf    ---add a stream block
---(earlier parts of nginx.conf omitted)---
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {    ---the apiserver addresses, i.e. master1 and master2
                server 192.168.9.30:6443;
                server 192.168.9.31:6443;
            }
    
    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}
---(rest of nginx.conf omitted)---
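
Before starting nginx, validate the configuration; the stream module has to be available in the nginx build (the official nginx.org package installed above includes it):

nginx -t    # should report "syntax is ok" and "test is successful"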
[root@LBMaster keepalived]# systemctl stop firewalld
[root@LBMaster keepalived]# setenforce 0
[root@LBMaster keepalived]# systemctl start nginx
[root@LBMaster keepalived]# systemctl enable nginx

[root@LBMaster ~]# yum install -y keepalived
[root@LBMaster ~]# cd /etc/keepalived/
[root@LBMaster keepalived]# cat keepalived.conf      ---keepalived.conf on LB-Master
global_defs { 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 
vrrp_script check_nginx {    ---health-check script that checks nginx status
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 { 
    state MASTER     ---primary
    interface ens33    ---must match this machine's interface name
    virtual_router_id 51    # VRRP router ID; unique per instance
    priority 100    # priority; the backup is set to 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.9.38/24    ---the VIP address
    } 
    track_script {
        check_nginx
    } 
}

[root@LBBackup keepalived]# cat keepalived.conf      ---keepalived.conf on LB-Backup
global_defs { 
   notification_email { 
     [email protected] 
     [email protected] 
     [email protected] 
   } 
   notification_email_from [email protected]  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 { 
    state BACKUP     ---backup
    interface ens33
    virtual_router_id 51    # VRRP router ID; unique per instance
    priority 90    # priority; set to 90 on the backup
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.9.38/24
    } 
    track_script {
        check_nginx
    } 
}

[root@LBMaster keepalived]# cat check_nginx.sh     ---upload the check_nginx.sh health-check script to both LB machines
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
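The script exits 0 while an nginx process exists and 1 otherwise, which is what keepalived's track_script uses to decide whether this machine may keep the VIP. It can be run by hand as a quick sanity check:

bash /etc/keepalived/check_nginx.sh; echo $?    # prints 0 while nginx is running, 1 after it is stopped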
[root@LBMaster keepalived]# chmod +x check_nginx.sh    ---make it executable
[root@LBMaster keepalived]# systemctl start keepalived    ---start keepalived
[root@LBMaster keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@LBMaster keepalived]# ps -ef | grep keep    ---keepalived is running
root      15928      1  0 23:31 ?        00:00:00 /usr/sbin/keepalived -D
root      15929  15928  0 23:31 ?        00:00:00 /usr/sbin/keepalived -D
root      15930  15928  0 23:31 ?        00:00:00 /usr/sbin/keepalived -D
root      16202  15743  0 23:32 pts/1    00:00:00 grep --color=auto keep

[root@LBMaster keepalived]# ip addr    ---the VIP is now active on LB-Master
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2c:e1:54 brd ff:ff:ff:ff:ff:ff
    inet 192.168.9.36/24 brd 192.168.9.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.9.38/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c51d:9a7b:614f:c2ef/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Access works normally (screenshots omitted).

# stop the service on 192.168.9.36 (LB-Master)
[root@LBMaster ~]# systemctl stop nginx
LB-Master itself no longer responds,

but the VIP still works,

which shows the VIP has automatically failed over to LB-Backup.

# start the service on 192.168.9.36 (LB-Master) again
[root@LBMaster ~]# systemctl start nginx

LB-Master and the VIP are both reachable again (screenshots omitted). If you keep pinging the VIP, a packet or two is dropped at the moment the service is stopped, and it recovers within a second or two.

Point the node(s) at the VIP

# perform the same change on every node
[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 192 *    ---find the config files that need changing
bootstrap.kubeconfig:    server: https://192.168.9.30:6443
kubelet.kubeconfig:    server: https://192.168.9.30:6443
kube-proxy.kubeconfig:    server: https://192.168.9.30:6443
[root@node1 cfg]# sed -i 's#192.168.9.30#192.168.9.38#g' *    ---bulk-replace master1's address with the VIP address
[root@node1 cfg]# grep 192 *    ---changed successfully
bootstrap.kubeconfig:    server: https://192.168.9.38:6443
kubelet.kubeconfig:    server: https://192.168.9.38:6443
kube-proxy.kubeconfig:    server: https://192.168.9.38:6443
# restart the services after the change
[root@node1 cfg]# systemctl restart kubelet
[root@node1 cfg]# systemctl restart kube-proxy

[root@LBMaster ~]# tail /var/log/nginx/k8s-access.log -f    ---check the LB-Master log: there are 4 requests
192.168.9.32 192.168.9.31:6443, 192.168.9.30:6443 - [15/Jul/2020:23:53:58 +0800] 200 0, 1155
192.168.9.32 192.168.9.30:6443 - [15/Jul/2020:23:53:58 +0800] 200 1155
192.168.9.35 192.168.9.31:6443, 192.168.9.30:6443 - [15/Jul/2020:23:54:13 +0800] 200 0, 1156
192.168.9.35 192.168.9.30:6443 - [15/Jul/2020:23:54:13 +0800] 200 1155
[root@master1 ~]# kubectl get node    ---master1 still sees the nodes, so everything is fine
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    <none>   3d7h   v1.16.0
node2   Ready    <none>   3d7h   v1.16.0

Test that the VIP works:

[root@master1 ~]# cat /opt/kubernetes/cfg/token.csv     ---look up the token on master1
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
[root@node1 cfg]# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.9.38:6443/version    ---test from the node using the token
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
[root@node2 cfg]# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.9.38:6443/version
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}

[root@LBMaster ~]# tail /var/log/nginx/k8s-access.log -f    ---LB-Master sees new requests
192.168.9.32 192.168.9.31:6443, 192.168.9.30:6443 - [15/Jul/2020:23:53:58 +0800] 200 0, 1155
192.168.9.32 192.168.9.30:6443 - [15/Jul/2020:23:53:58 +0800] 200 1155
192.168.9.35 192.168.9.31:6443, 192.168.9.30:6443 - [15/Jul/2020:23:54:13 +0800] 200 0, 1156
192.168.9.35 192.168.9.30:6443 - [15/Jul/2020:23:54:13 +0800] 200 1155
^C
[root@LBMaster ~]# tail /var/log/nginx/k8s-access.log -f
192.168.9.32 192.168.9.31:6443, 192.168.9.30:6443 - [15/Jul/2020:23:53:58 +0800] 200 0, 1155
192.168.9.32 192.168.9.30:6443 - [15/Jul/2020:23:53:58 +0800] 200 1155
192.168.9.35 192.168.9.31:6443, 192.168.9.30:6443 - [15/Jul/2020:23:54:13 +0800] 200 0, 1156
192.168.9.35 192.168.9.30:6443 - [15/Jul/2020:23:54:13 +0800] 200 1155
192.168.9.32 192.168.9.30:6443 - [16/Jul/2020:00:01:34 +0800] 200 475
192.168.9.35 192.168.9.30:6443 - [16/Jul/2020:00:01:41 +0800] 200 475
