Kubernetes Cluster Setup & Dashboard Deployment

1. Kubernetes Cluster Setup

This setup uses three CentOS servers (one master, two workers). On each server, install Docker (18.06.1), kubeadm (1.18.0), kubectl (1.18.0), and kubelet (1.18.0).

The three hosts are configured as follows:

Role     IP Address       OS            Spec
Master   192.168.56.20    CentOS 7.5+   2 CPU / 2 GB RAM
Node1    192.168.56.21    CentOS 7.5+   2 CPU / 2 GB RAM
Node2    192.168.56.22    CentOS 7.5+   2 CPU / 2 GB RAM
1) Environment initialization (run on all nodes)

(1) Check the OS version

Check the OS version (CentOS 7.5 or later is required):

cat /etc/redhat-release

(2) Stop the firewall and disable it at boot

Stop the firewall:

systemctl stop firewalld

Disable the firewall at boot:

systemctl disable firewalld

(3) Set the hostname

Set the hostname:

hostnamectl set-hostname <hostname>
  • Set the hostname of 192.168.56.20:
hostnamectl set-hostname k8s-master
  • Set the hostname of 192.168.56.21:
hostnamectl set-hostname k8s-node1
  • Set the hostname of 192.168.56.22:
hostnamectl set-hostname k8s-node2

(4) Hostname resolution

cat >> /etc/hosts << EOF
192.168.56.20 k8s-master
192.168.56.21 k8s-node1
192.168.56.22 k8s-node2
EOF

(5) Time synchronization

Kubernetes requires the clocks of all nodes in the cluster to be precisely in sync, so set up time synchronization on every node:

yum install ntpdate -y
ntpdate time.windows.com
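
Optionally, verify the time on each node and, if desired, schedule a periodic re-sync via cron; the 30-minute interval below is only an example and is not part of the original steps:

# Check that the clocks now agree across nodes
date
# Example: re-sync every 30 minutes via cron
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -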

(6) Disable SELinux

Check whether SELinux is enabled:

getenforce

Permanently disable SELinux (requires a reboot to take effect):

sed -i 's/enforcing/disabled/' /etc/selinux/config
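
If you prefer not to wait for the reboot at the end of the initialization steps, SELinux can also be switched to permissive mode for the current session (an optional extra step, not in the original write-up):

setenforce 0
# Should now report Permissive
getenforce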

(7) Disable the swap partition

Permanently disable swap (requires a reboot to take effect):

sed -ri 's/.*swap.*/#&/' /etc/fstab
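
The sed command above only takes effect after a reboot. To turn swap off immediately and verify it (optional extra commands), run:

swapoff -a
# The Swap row should now show 0
free -m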

(8) Pass bridged IPv4 traffic to the iptables chains

On every node, configure bridged IPv4 traffic to be passed to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

Load the br_netfilter module:

modprobe br_netfilter

Check whether the br_netfilter module is loaded:

lsmod | grep br_netfilter

Apply the settings:

sysctl --system
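
A quick optional check that the settings are in effect:

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Both values should be printed as 1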

(9) Enable IPVS

In Kubernetes, a Service can use one of two proxy modes: one based on iptables and one based on IPVS. IPVS performs better than iptables, but to use it the IPVS kernel modules must be loaded manually.

Install ipset and ipvsadm:

yum -y install ipset ipvsadm

Create the following script:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable, run it, and check that the modules are loaded:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The loaded modules can be re-checked at any time with:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4
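
Note that scripts under /etc/sysconfig/modules/ are not guaranteed to run at boot on every setup. As an optional extra (not part of the original steps), the modules can also be registered with systemd-modules-load so they are reloaded automatically after the reboot below:

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF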

(10) Reboot all three machines

reboot
2) Install Docker, kubeadm, kubelet, and kubectl (run on all nodes)

(1) Install Docker

yum -y install wget
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7 containerd.io
systemctl enable docker && systemctl start docker
docker version

Configure a Docker registry mirror (image accelerator):

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],	
  "registry-mirrors": ["https://du3ia00u.mirror.aliyuncs.com"],	
  "live-restore": true,
  "log-driver":"json-file",
  "log-opts": {"max-size":"500m", "max-file":"3"},
  "storage-driver": "overlay2"
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
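
To confirm that Docker restarted with the systemd cgroup driver configured above (an optional check):

docker info | grep -i "cgroup driver"
# Expected output: Cgroup Driver: systemd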

(2) Add the Alibaba Cloud yum repository

The Kubernetes package repositories are hosted abroad and are hard to reach from China, so switch to the Alibaba Cloud mirror:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

(3) Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0

To make the cgroup driver used by kubelet match the one used by Docker, it is recommended to modify the contents of /etc/sysconfig/kubelet:

vi /etc/sysconfig/kubelet
# Modify as follows
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"

Just enable kubelet to start on boot; since its configuration has not been generated yet, it will be started automatically after cluster initialization:

systemctl enable kubelet
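
Optionally, confirm on each node that the expected 1.18.0 versions were installed:

kubeadm version
kubelet --version
kubectl version --client
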
3) Deploy the Kubernetes Master

Deploy the Kubernetes master node (192.168.56.20):

# The default image registry k8s.gcr.io is not reachable from inside China, so the Alibaba Cloud registry is specified instead; apiserver-advertise-address must be the master node's IP
kubeadm init \
  --apiserver-advertise-address=192.168.56.20 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

Based on the instructions printed by kubeadm init, set up kubectl on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
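
Optionally, confirm that kubectl can reach the API server; the master will report NotReady until the CNI plugin is deployed in a later step:

kubectl cluster-info
kubectl get nodes
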
4) Join the Kubernetes Nodes

As instructed by the kubeadm init output, run the following command on both worker nodes (192.168.56.21 and 192.168.56.22):

kubeadm join 192.168.56.20:6443 --token brmcna.yw1svs0vp4qqz1fm \
    --discovery-token-ca-cert-hash sha256:921bea5a17d797b228e048316dada19e21e24a0187abce996c7d06d0fe6c831e

The default token is valid for 24 hours. Once it expires, it can no longer be used; at that point a new join command (with a fresh token) can be generated with:

kubeadm token create --print-join-command
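
If only the discovery hash is needed (for example to assemble a join command by hand), it can also be recomputed from the cluster CA certificate on the master; this is an optional alternative to the command above:

# Recompute the sha256 discovery hash from the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'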

5) Deploy the CNI Network Plugin

On the master node, use kubectl to check the node status:

kubectl get node

Kubernetes supports several network plugins, such as flannel, calico, and canal; flannel is used here.

On the master node, obtain the flannel configuration file. The content of kube-flannel.yml used here is as follows:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Note: the CentOS servers used here have multiple network interfaces, so the internal NIC must be specified explicitly in the manifest. Here the NIC is enp0s8; modify the kube-flannel container args as follows:

      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8
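
If you are not sure which interface name to pass to --iface, the interfaces and their IPv4 addresses can be listed on the node; pick the one that holds the 192.168.56.x address:

ip -o -4 addr show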

Deploy flannel using the manifest:

kubectl apply -f kube-flannel.yml

Check the deployment progress of the CNI plugin:

kubectl get pod -n kube-flannel

The installation is complete once all pods are in the Running state.
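
If a pod stays in a non-Running state, the usual optional troubleshooting commands are (replace <pod-name> with the actual pod name from the listing above):

kubectl -n kube-flannel describe pod <pod-name>
kubectl -n kube-flannel logs <pod-name>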

Check the node status again; all nodes should now be Ready:

kubectl get node

6) Test the Kubernetes Cluster

Deploy an Nginx instance in the cluster to check that the cluster is working properly.

Create a deployment:

kubectl create deployment nginx --image=nginx:1.14-alpine

Expose it via a NodePort:

kubectl expose deployment nginx --port=80 --type=NodePort

Check the pod and service status:

kubectl get pods,svc -o wide

The Nginx pod is running on k8s-node2 (192.168.56.22) and the mapped NodePort is 32296; opening http://192.168.56.22:32296/ in a browser shows the Nginx welcome page.
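
The same check can be done from the command line (the NodePort 32296 comes from the output above and will differ from cluster to cluster):

curl http://192.168.56.22:32296/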

2. Deploy the Dashboard

1) Download the yaml and run the Dashboard

(1) Download the yaml

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

(2) Change the Service type of kubernetes-dashboard

vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30009  # added
  selector:
    k8s-app: kubernetes-dashboard

(3) Deploy

kubectl apply -f recommended.yaml

(4) Check the resources in the kubernetes-dashboard namespace

kubectl get pod,svc -n kubernetes-dashboard -o wide

The kubernetes-dashboard pod is running on k8s-node1 (192.168.56.21) and the mapped NodePort is 30009; opening https://192.168.56.21:30009/ in a browser shows the kubernetes-dashboard login page.

2) Create an access account and get a token

(1) Create the service account

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

(2) Grant permissions

kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin

(3) Get the account's token

kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin

kubectl describe secrets dashboard-admin-token-kqhc7 -n kubernetes-dashboard
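
Alternatively, the token alone can be printed in one line; the secret name dashboard-admin-token-kqhc7 is specific to this cluster and will differ in yours:

kubectl -n kubernetes-dashboard get secret dashboard-admin-token-kqhc7 -o jsonpath='{.data.token}' | base64 -d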

Enter the token obtained above on the login page.

After logging in, the Dashboard overview page is displayed.
