Kubernetes Installation

Requirements

  1. Servers: CentOS 7, 2 CPU cores, 2 GB RAM

Node plan

Node       Spec     Role
master     2c 2G    master node
k8swork1   2c 2G    worker node

Server preparation

  1. Set the hostname on each server (and keep the system clocks of all hosts in sync)

    # master node
    vi /etc/hostname     # set the content to: master
    # worker node
    vi /etc/hostname     # set the content to: k8swork1
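    # alternatively, on CentOS 7 a single command sets the hostname immediately and persistently
    hostnamectl set-hostname master      # on the worker, use k8swork1 instead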
    
  2. Disable SELinux and the firewall

    sed -i "s/^SELINUX\=enforcing/SELINUX\=disabled/g" /etc/selinux/config
    setenforce 0 
    systemctl stop firewalld
    systemctl disable firewalld
    
  3. Disable swap

    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
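    # also turn swap off for the current boot (the sed above only takes effect after a reboot)
    swapoff -a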
    
  4. Kernel parameters

    echo "net.bridge.bridge-nf-call-ip6tables = 1" >>/etc/sysctl.conf
    echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
    echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 
    sysctl -p
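
    If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet. A minimal fix on a stock CentOS 7 kernel:

    # load the bridge netfilter module so the bridge-nf-call-* keys are available
    modprobe br_netfilter
    # make the module load survive a reboot
    echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
    sysctl -p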
    
  5. Yum repository configuration (the resulting repo files can be copied to the other machines)

    5.1 Back up the old repo files

    cd /etc/yum.repos.d/
    mkdir bak
    mv *.repo bak
    

    5.2 Download the Aliyun repo files

    cd /etc/yum.repos.d/
    curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    

    5.3 Create the Kubernetes yum repo file

    cd /etc/yum.repos.d/
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
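
    After swapping the repo files, it is worth rebuilding the yum cache so the new repositories take effect:

    yum clean all
    yum makecache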
    
  6. Docker setup

    # install docker
    yum install -y docker
    # enable docker to start on boot
    systemctl enable docker.service
    # start docker
    service docker restart
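    # optional sanity check: show which cgroup driver docker uses; kubeadm preflight
    # will warn if it does not match the kubelet's (cgroupfs vs systemd)
    docker info 2>/dev/null | grep -i "cgroup driver"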
    

Install the Kubernetes components

  1. Install the packages

    yum install -y kubelet kubeadm kubectl kubernetes-cni
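    # the official kubeadm install steps also enable kubelet here; it will crash-loop
    # until `kubeadm init` runs, which is expected
    systemctl enable kubelet
    # note: without a version pin, yum installs the latest packages; to stay in line
    # with the v1.18 images used below, you may prefer something like:
    #   yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0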
    
  2. When they start, the Kubernetes components depend on a number of images hosted under gcr.io, which is not reachable from mainland China, so download these images in advance.

    # the following command lists the image versions this kubeadm release expects
    kubeadm config images list
    
    # output:
    k8s.gcr.io/kube-apiserver:v1.18.20
    k8s.gcr.io/kube-controller-manager:v1.18.20
    k8s.gcr.io/kube-scheduler:v1.18.20
    k8s.gcr.io/kube-proxy:v1.18.20
    k8s.gcr.io/pause:3.2
    k8s.gcr.io/etcd:3.4.3-0
    k8s.gcr.io/coredns:1.6.7
    
  3. Since gcr.io cannot be reached, pull equivalent images published on Docker Hub and retag them:

    docker pull gotok8s/kube-apiserver:v1.18.5
    docker pull gotok8s/kube-controller-manager:v1.18.5
    docker pull gotok8s/kube-scheduler:v1.18.5
    docker pull gotok8s/kube-proxy:v1.18.5
    docker pull gotok8s/pause:3.2
    docker pull gotok8s/etcd:3.4.3-0
    docker pull gotok8s/coredns:1.6.7
    
    docker tag docker.io/gotok8s/kube-apiserver:v1.18.5 k8s.gcr.io/kube-apiserver:v1.18.20
    docker tag docker.io/gotok8s/kube-proxy:v1.18.5 k8s.gcr.io/kube-proxy:v1.18.20
    docker tag docker.io/gotok8s/kube-controller-manager:v1.18.5 k8s.gcr.io/kube-controller-manager:v1.18.20
    docker tag docker.io/gotok8s/kube-scheduler:v1.18.5 k8s.gcr.io/kube-scheduler:v1.18.20
    docker tag docker.io/gotok8s/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
    docker tag docker.io/gotok8s/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
    docker tag docker.io/gotok8s/pause:3.2 k8s.gcr.io/pause:3.2
    
  4. Initialize the master with kubeadm

     kubeadm init --apiserver-advertise-address=192.168.136.133 --kubernetes-version v1.18.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers
    

    If nothing goes wrong, you will see that the master has been created.

    However, the master will normally still be NotReady at this point: Kubernetes depends on a network add-on, and you will see that the coredns pods have not started yet.
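
    Before running kubectl on the master, put the admin kubeconfig in place (kubeadm init prints the equivalent commands at the end of its output; here run as root):

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config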

  5. Install flannel

    5.1 Create a flannel.yaml file; you can copy the following content:

    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.168.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.12.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.12.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    

    5.2 Apply it

    kubectl apply -f flannel.yaml
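
    Optionally, wait for the flannel DaemonSet to finish rolling out (the name below matches the amd64 DaemonSet defined in the yaml above):

    kubectl -n kube-system rollout status ds/kube-flannel-ds-amd64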
    

    Check how the flannel pods are doing:

    kubectl get pods --all-namespaces
    

    Output:

    NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
    kube-system   coredns-7ff77c879f-fzfgj           1/1     Running   0          12h
    kube-system   coredns-7ff77c879f-mc6h4           1/1     Running   0          12h
    kube-system   etcd-master                        1/1     Running   0          12h
    kube-system   kube-apiserver-master              1/1     Running   0          12h
    kube-system   kube-controller-manager-master     1/1     Running   3          12h
    kube-system   kube-flannel-ds-amd64-nc6wh        1/1     Running   0          11h
    # flannel is a DaemonSet, so one pod runs on every node
    kube-system   kube-flannel-ds-amd64-nl59d        1/1     Running   0          11h
    kube-system   kube-proxy-fg6l5                   1/1     Running   2          12h
    kube-system   kube-proxy-ph5m6                   1/1     Running   0          12h
    kube-system   kube-scheduler-master              1/1     Running   3          12h
    
    
  6. At this point, check the status of the master node

    [root@master deployment]# kubectl get nodes | grep master
    master     Ready    master   12h   v1.18.0
    
  7. Join the worker node

    kubeadm join 192.168.136.133:6443 --token yy2huh.9e20jcil00z4rhwf     --discovery-token-ca-cert-hash sha256:3336bb808ec8b8f1d1482a52cbfee2f2cb8252b1902b7dcf83df191d1e7ca669
    

    Notes:

    1. Generating the token

      On the master, run kubeadm token list to see the currently valid tokens.

      If there is none, create one with kubeadm token create.

    2. If you do not have the discovery-token-ca-cert-hash, generate it on the master with:

      openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null |  openssl dgst -sha256 -hex | sed 's/^.* //'
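
      Alternatively, a single command on the master prints a complete, ready-to-paste join command, token and CA cert hash included:

      kubeadm token create --print-join-command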
      
    3. If the worker node is NotReady after joining

      # first, find out why the node is NotReady
      kubectl describe node k8swork1
      
      Conditions:
        Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
        ----                 ------  -----------------                 ------------------                ------                       -------
        NetworkUnavailable   False   Tue, 14 Sep 2021 22:22:28 +0800   Tue, 14 Sep 2021 22:22:28 +0800   FlannelIsUp                  Flannel is running on this node
        MemoryPressure       False   Wed, 15 Sep 2021 09:50:21 +0800   Wed, 15 Sep 2021 09:24:39 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
        DiskPressure         False   Wed, 15 Sep 2021 09:50:21 +0800   Wed, 15 Sep 2021 09:24:39 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
        PIDPressure          False   Wed, 15 Sep 2021 09:50:21 +0800   Wed, 15 Sep 2021 09:24:39 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
        Ready                False   Wed, 15 Sep 2021 09:50:21 +0800   Wed, 15 Sep 2021 09:24:39 +0800   KubeletNotReady              <error message explaining why the kubelet is not ready>
      
      

      Note: identifying the root cause

      It is usually because the worker node cannot pull the flannel image, which leaves the pod network broken.

      Find the corresponding container on the worker node to confirm why it fails to start; in my local test it turned out to be the firewall/SELinux settings, which made flannel fail on startup.
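
      For example, on the worker node you can locate the flannel container and read its logs (<container-id> below is a placeholder taken from the first command's output):

      # list flannel containers, including exited ones
      docker ps -a | grep flannel
      # show why the container failed to start
      docker logs <container-id>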

      Re-apply the setting:

      ## if you followed the steps above, this was already done in step 2 of the server preparation
      vi /etc/selinux/config
      # set:
      SELINUX=disabled
      
  8. Check the node status

    kubectl get nodes
    
    [root@master deployment]# kubectl get nodes
    NAME       STATUS   ROLES    AGE   VERSION
    k8swork1   Ready    <none>   12h   v1.18.0
    master     Ready    master   12h   v1.18.0
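
    The empty ROLES column for k8swork1 is normal: worker nodes get no role label by default. If you want a role to show up there, you can add the label yourself, for example:

    kubectl label node k8swork1 node-role.kubernetes.io/worker=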
    

    The output above shows that both nodes are now in the Ready state.
