[kubernetes] Series 1: Deploying k8s v1.15.0 (or the latest version) on CentOS, plus the visual dashboard

[kubernetes] Series 1: Deploying k8s v1.15.0 on CentOS

  • Environment preparation
  • Installation
  • Cluster configuration
  • Verification
  • Installing the Kubernetes Dashboard
  • Installing k8s on Ubuntu


Reference: https://blog.csdn.net/u014692704/article/details/96746356

Environment preparation

  • 1. Machines: at least two test machines running CentOS (>= 2 cores / 2 GB RAM each)
      hostname     ip              role
      k8s-master   192.168.78.22   master
      k8s-node1    192.168.78.23   node1
      k8s-node2    192.168.78.24   node2
  • 2. Server environment preparation:
    • Hostnames
      hostnamectl set-hostname k8s-master
      hostnamectl set-hostname k8s-node1
      hostnamectl set-hostname k8s-node2
      hostnamectl already persists the hostname on CentOS 7; the CentOS 6 style equivalent is: echo "HOSTNAME=k8s-master" >> /etc/sysconfig/network
    • Name resolution (entries in /etc/hosts)
    cat >> /etc/hosts <<EOF
    192.168.78.22 k8s-master
    192.168.78.23 k8s-node1
    192.168.78.24 k8s-node2
    
    EOF 
    
    • Disable the firewall, SELinux, and swap.
      systemctl stop firewalld && systemctl disable firewalld;
      setenforce 0;
      sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config;
      swapoff -a;
      sed -i 's/.*swap.*/#&/' /etc/fstab;
    • Traffic forwarding and kernel parameters
    # enable IP forwarding
    echo "1" > /proc/sys/net/ipv4/ip_forward
    # kernel parameters: pass bridged IPv4 traffic to the iptables chains
    echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
    # apply the configuration
    sysctl --system
    
  • 3. Configure domestic (China) mirror repositories for CentOS
    • yum base repo
    yum install -y wget;
    mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
    yum clean all && yum makecache
    
    • Kubernetes repo
    vim /etc/yum.repos.d/kubernetes.repo 
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    
    • docker repo
      You can also refer to my other article, which specifically covers how to install a given docker version on CentOS/Ubuntu.
      wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
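  • Optional: a quick sanity check of the preparation above (a minimal sketch; note that the net.bridge sysctls only take effect once the br_netfilter kernel module is loaded)
    getenforce                                   # expect Permissive or Disabled
    swapon -s                                    # expect no output
    cat /proc/sys/net/ipv4/ip_forward            # expect 1
    modprobe br_netfilter; sysctl net.bridge.bridge-nf-call-iptables    # expect 1
    yum list docker-ce --showduplicates | sort -r | head    # confirms the docker repo works and lists installable versions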

Installation

  • 4. Install docker, kubeadm, kubectl, and kubelet on the master and all worker nodes
# install docker
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker version
# check the latest versions installable via yum
for i in kubelet kubeadm kubectl;do yum info $i; done
# install the pinned kube* versions
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.1
systemctl enable kubelet
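A quick version check after installing (a sketch; kubelet will keep restarting until kubeadm init/join has run, which is expected at this stage):
kubeadm version -o short
kubectl version --client --short
rpm -q kubelet docker-ce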
  • Ansible playbook (the same setup automated across all hosts)
---
- hosts: k8s
  remote_user: root
  any_errors_fatal: true
  gather_facts: no
  tasks:
  - name: stop selinux
    shell: sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config && setenforce 0
  - name: stop swap
    shell: swapoff -a && sed -i 's/.*swap.*/#&/' /etc/fstab
  - name: change ip forward
    shell: echo "1" > /proc/sys/net/ipv4/ip_forward
  - name: check k8s conf
    shell: /usr/bin/ls /etc/sysctl.d/k8s.conf
    ignore_errors: True
    register: result
  - name: create k8s conf
    file: path=/etc/sysctl.d/k8s.conf state=touch
    when: result is failure
  - name: edit k8s conf
    shell: echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
  - name: update k8s conf
    shell: /usr/sbin/sysctl --system
  - name: add k8s yum repo
    yum_repository:
      name: kubernetes
      description: Kubernetes
      baseurl:
        - https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
      gpgcheck: yes
      gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  #- name: add docker repo
  #  shell: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
  - name: add docker repo
    get_url: url=https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo dest=/etc/yum.repos.d/docker-ce.repo
  - name: start install k8s
    yum: name="{{ item }}" state=present
    with_items:
      - docker-ce-18.06.1.ce-3.el7
      - kubelet-1.15.0
      - kubeadm-1.15.0
      - kubectl-1.15.1
  - name: service docker
    service: name=docker state=started enabled=yes
  - name: service kubelet
    service: name=kubelet state=restarted enabled=yes
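A minimal way to run the playbook above (a sketch: the inventory group must be named k8s to match "hosts: k8s"; the playbook file name k8s-install.yml is only an example):
cat > hosts <<EOF
[k8s]
192.168.78.22
192.168.78.23
192.168.78.24
EOF
ansible-playbook -i hosts k8s-install.yml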

Cluster configuration

  • 5. Cluster configuration
    • Reset kubeadm (only needed when re-initializing a previously configured node)
      kubeadm reset
      systemctl daemon-reload && systemctl restart kubelet

    • Initialize the cluster on the master node
      kubeadm init --kubernetes-version=1.15.0 --apiserver-advertise-address=192.168.78.22 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap

      --apiserver-advertise-address=192.168.78.22 is the master node's own address.

      On success, the init output contains a token as part of the node join command:
      kubeadm join 192.168.78.22:6443 --token iqx10t.4649bhbsoku6a99e --discovery-token-ca-cert-hash sha256:aae284299a36fcc07a07b2cd2eff5cc2ac3b05f6bcdab17bce2ea26358f30804
      Note: the token has a limited lifetime; run kubeadm token list to see its details.
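      If the token has already expired, a fresh join command (with a new token and the CA cert hash) can be generated on the master:
      kubeadm token create --print-join-command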

    • Configure the kubectl command-line tool
      At this point, running kubectl get no fails with:
      ERROR: The connection to the server localhost:8080 was refused - did you specify the right host or port?
      kubectl reads the cluster credentials from admin.conf; without this step it cannot reach the API server. The references I consulted also recommend running kubectl as a regular (non-root) user, which is safer.

    useradd kube;
    passwd kube;
    # grant kube sudo rights: add an entry to /etc/sudoers, or add the user to an existing sudo group, for example:
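    # a sketch: the wheel group carries sudo rights in the default CentOS 7 sudoers
    usermod -aG wheel kube;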
    su - kube;
    mkdir -p $HOME/.kube;
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config;
    sudo chown $(id -u):$(id -g) $HOME/.kube/config;
    # kubectl bash auto-completion
    yum install bash-completion -y;
    source /usr/share/bash-completion/bash_completion;
    source <(kubectl completion bash)
    echo "source <(kubectl completion bash)" >> ~/.bashrc;
    
    • Deploy flannel as the pod network, i.e. container networking across nodes
      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
      If the external URL is unreachable, try:
      https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
      You can also use the images I have shared: write the following content to flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-hangzhou.aliyuncs.com/dyiwen/k8s-flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Then apply it: kubectl apply -f flannel.yml
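To verify that flannel came up (a sketch; the pods carry the app=flannel label defined in the manifest above):
kubectl -n kube-system get pods -l app=flannel -o wide
kubectl get nodes    # nodes should turn Ready once flannel is running on them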

  • Join the worker nodes to the cluster
    kubeadm join 192.168.78.22:6443 --token iqx10t.4649bhbsoku6a99e --discovery-token-ca-cert-hash sha256:aae284299a36fcc07a07b2cd2eff5cc2ac3b05f6bcdab17bce2ea26358f30804

    If the command hangs at [preflight] Running pre-flight checks, check:
    [a] whether the servers' clocks are in sync (see the chrony sketch at the end of this section)
    [b] whether the token has expired

    A successful join produces output like:

    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    On the master node, run kubectl get nodes -o wide to see detailed cluster information.
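    For check [a] above, a minimal sketch for keeping the servers' clocks in sync with chrony (the chrony package comes from the base CentOS 7 repo):
    yum install -y chrony
    systemctl enable chronyd && systemctl restart chronyd
    chronyc sources -v    # verify that at least one time source is reachable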


Verification

[root@k8s-master ~]# kubectl get no
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   15h   v1.15.0
k8s-node1    Ready    node     94m   v1.15.0
k8s-node2    Ready    node     102m  v1.15.0

If kubectl get no shows a node whose STATUS is NotReady, check whether the flannel pods are running and whether their images were pulled successfully. Useful commands:
kubectl get pod --all-namespaces;
# inspect a specific pod in a given namespace
kubectl describe pod [pod-name] --namespace=kube-system
# add the "node" role label to a worker node (fills in the ROLES column shown above)
kubectl label nodes k8s-node1 node-role.kubernetes.io/node=


Installing the Kubernetes Dashboard

Official documentation:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-the-dashboard-ui
GitHub project:
https://github.com/kubernetes/dashboard

  • Fetch the yaml for version 1.10.1 (1.8 and 1.10 are the versions currently available)
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
  • Edit the yaml: switch the image to an Aliyun mirror, pin the pod to a node, and expose the service externally via NodePort (the external port can be fixed too; see the sketch after this snippet)
vim kubernetes-dashboard.yaml
----
...
spec:
 nodeName: k8s-master # pin the pod to the master node
 containers:
      - name: kubernetes-dashboard
        #image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
...
...
  ports:
    - port: 443
      protocol: TCP
      targetPort: 8443                                                                                                                                         
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort # expose the service outside the cluster
...
------
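  If you also want a fixed external port rather than a randomly assigned one, a nodePort can be added to the Service (a sketch; the value must fall in the NodePort range, 30000-32767 by default; 30353 below is only an example that matches the svc output shown later):
  ports:
    - port: 443
      protocol: TCP
      targetPort: 8443
      nodePort: 30353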
  • Deploy
    kubectl apply -f kubernetes-dashboard.yaml
  • The dashboard is served over HTTPS, and Chrome rejects its default certificate, so generate a self-signed certificate of your own
    mkdir dashboard-key
    cd ./dashboard-key
  1. Generate the private key
    openssl genrsa -out dashboard.key 2048
  2. Generate the certificate signing request (use the internal address of the server hosting the dashboard as the CN)
    openssl req -days 3650 -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.78.22'
  3. Generate the self-signed certificate
    openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  4. Delete the secret created by the deployment, or take the deployment down entirely
    kubectl delete secret kubernetes-dashboard-certs -n kube-system

    kubectl delete -f kubernetes-dashboard.yaml
  5. Create a secret with the same name from the new certificate
    kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
  6. Restart the pod, or comment out the Secret section of the yaml and re-apply it
    kubectl delete pod kubernetes-dashboard-dccc44c55-vzcqx
    Kubernetes will recreate the pod automatically.

    To comment out the Secret section:
vim kubernetes-dashboard.yaml
# ------------------- Dashboard Secret ------------------- #

#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kube-system
#type: Opaque

Then apply again: kubectl apply -f kubernetes-dashboard.yaml
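A quick sanity check of the self-signed certificate from step 3 (a sketch, assuming openssl is available on the node):
openssl x509 -in dashboard.crt -noout -subject -dates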

  • Check the service (svc)
[root@k8s-master pod]# kubectl get svc -n kube-system 
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.1.0.10     <none>        53/UDP,53/TCP,9153/TCP   29h
kubernetes-dashboard   NodePort    10.1.236.41   <none>        443:30353/TCP            42m
  • Open https://192.168.78.22:30353 in a browser
  • Create a ServiceAccount and token for dashboard login
vim dashboard-admin.yaml
--- 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

kubectl apply -f dashboard-admin.yaml

  • Retrieve the login token
    kubectl describe secrets `kubectl get secrets -n kube-system | grep admin | awk '{print $1}'` -n kube-system
    Paste the token from the output into the login page to sign in.
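    An alternative that prints only the token value (a sketch; it reads the secret name from the admin-user ServiceAccount created above):
    kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d; echo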

Installing k8s on Ubuntu

apt install selinux-utils
setenforce 0    # Ubuntu does not enable SELinux by default, so this is usually a no-op

swapoff -a

systemctl stop firewalld
systemctl disable firewalld
# Ubuntu normally ships ufw rather than firewalld; stop and disable whichever firewall is actually active


Also remove (or comment out) the swap line in /etc/fstab.
  • Add the apt sources
# 1. Update the package index and install the required tools
sudo apt-get update && sudo apt-get install -y apt-transport-https

# 2. Add the GPG key (from a domestic mirror)
curl -O http://packages.faasx.com/google/apt/doc/apt-key.gpg;
apt-key add apt-key.gpg

# 3. Add the USTC mirror
sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
deb http://mirrors.ustc.edu.cn/kubernetes/apt/ kubernetes-xenial main
EOF

# 4. Alternative to step 3: the Aliyun mirror (recommended)
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

# 5. Refresh the package index
apt-get update

# 6. List the available versions of a package
apt-cache madison <package>    # e.g. apt-cache madison kubeadm

# 7. Install (optionally pinned to a specific version)
apt-get install -y <package>=<version>
apt-get install -y kubelet kubeadm kubectl
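# 8. Optional: pin the installed versions so a later apt-get upgrade does not replace them (a sketch)
apt-mark hold kubelet kubeadm kubectl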
  • DNS servers that can speed up resolution of the mirrors (add to /etc/resolv.conf as needed)
nameserver 119.29.29.29 
nameserver 114.114.114.114 
nameserver 180.76.76.76 
nameserver 1.2.4.8
nameserver 127.0.0.53
