Deploying a Kubernetes Cluster Quickly with kubeadm

Environment:

k8s-master ---- 192.168.1.199 -- 2 CPUs/2 cores, 2 GB RAM, CentOS 7.x
k8s-node1 ----- 192.168.1.220 -- 2 CPUs/2 cores, 4 GB RAM, CentOS 7.x
k8s-node2 ----- 192.168.1.221 -- 2 CPUs/2 cores, 4 GB RAM, CentOS 7.x

I. System Initialization

Batch the steps across all hosts with Ansible:

cat /etc/ansible/hosts
[k8s]
192.168.1.199 name=k8s-master
192.168.1.220 name=k8s-node1
192.168.1.221 name=k8s-node2

playbook.yml

---
- hosts: k8s
  gather_facts: no
  tasks:
    - name: Disable the firewall
      systemd: name=firewalld state=stopped enabled=no
    - name: Disable SELinux
      shell: sed -i 's/enforcing/disabled/' /etc/selinux/config
    - name: Disable swap
      shell: sed -ri 's/.*swap.*/#&/' /etc/fstab
    - name: Install time sync
      yum: name=ntpdate state=installed
    - name: Run time sync
      shell: ntpdate time.windows.com
    - name: Copy the sysctl tuning script
      copy: src=edit_sysctl.sh dest=/root/
    - name: Run the script
      shell: sh /root/edit_sysctl.sh
    - name: Set the hostname
      tags: hostname
      shell: hostnamectl set-hostname {{ name }}
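With the inventory and playbook above in place, the batch run is simply (assuming playbook.yml and edit_sysctl.sh sit in the current directory):

```shell
# Apply all tasks to the [k8s] inventory group
ansible-playbook playbook.yml
# The hostname task is tagged, so it can be re-run on its own later:
ansible-playbook playbook.yml --tags hostname
```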

edit_sysctl.sh

#!/bin/bash
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
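One note on the script: if sysctl --system reports the net.bridge.* keys as unknown, the br_netfilter kernel module is probably not loaded yet; loading it first (a step the script above omits) fixes that:

```shell
# Load the bridge netfilter module so the net.bridge.* sysctl keys exist
modprobe br_netfilter
# Make it persistent across reboots
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system
```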

Update the hosts file on all machines:

vim /etc/hosts
192.168.1.199 k8s-master
192.168.1.220 k8s-node1
192.168.1.221 k8s-node2

Once initialization is done, it is best to reboot so that disabling swap takes effect.

II. Install Docker / kubeadm / kubelet (all nodes)

Install Docker
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce
$ systemctl enable docker && systemctl start docker

Configure a registry mirror
$ cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

Restart Docker
$ systemctl restart docker
$ docker info

Add the Aliyun Kubernetes YUM repository
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl

$ yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
$ systemctl enable kubelet

III. Deploy the Kubernetes Master

1. Run the initialization command on the master (192.168.1.199):

kubeadm init \
  --apiserver-advertise-address=192.168.1.199 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

Alternatively, put the same settings into a config file (vi kubeadm.conf) and then run: kubeadm init --config kubeadm.conf --ignore-preflight-errors=all
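A sketch of what that kubeadm.conf could look like, mirroring the flags above (kubeadm 1.19 uses the kubeadm.k8s.io/v1beta2 config API):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.199
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.19.0
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
```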

Parameter notes:

  • --apiserver-advertise-address: the address the cluster is advertised on
  • --image-repository: the default registry k8s.gcr.io is not reachable from inside China, so the Aliyun mirror registry is specified here
  • --kubernetes-version: the K8s version, matching the packages installed above
  • --service-cidr: the cluster-internal virtual network, the unified access entry for Pods
  • --pod-network-cidr: the Pod network; must match the CIDR in the CNI component YAML deployed below



2. Copy the kubeconfig that kubectl uses to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Check the node status

$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   NotReady   master   2m    v1.19.0

IV. Join the Worker Nodes to the Cluster

To add a new node to the cluster, run the kubeadm join command printed at the end of the kubeadm init output:

kubeadm join 192.168.1.199:6443 --token brjfr2.xtp3ytjalcrw7div \
    --discovery-token-ca-cert-hash sha256:fbaf11091afe3792277492d7253fa7dfb81293bdc6ecaac7df0f6213ec4d43c5
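If the token has expired (the default lifetime is 24 hours) or the join command was lost, a fresh one can be printed on the master:

```shell
# Regenerate a valid join command, token included
kubeadm token create --print-join-command
```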

V. Deploy the CNI Network

The nodes are still NotReady. In a situation like this, check the kubelet logs first to find the problem.
Since no network plugin has been deployed yet, it is bound to be a network issue:
k8s-master kubelet[6737]: W0603 15:52:00.865681 6737 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d

The Calico component

Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.
On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter propagates the routes of the workloads running on it to the whole Calico network via BGP.
The Calico project also implements Kubernetes network policy, providing ACL functionality.

Deploy Calico

Download the Calico manifest:

wget https://docs.projectcalico.org/manifests/calico.yaml

Edit the IP range in calico.yaml



This changes the default 192.168.0.0/16 range to 10.244.0.0/16 (the CIDR specified in kubeadm init at the very beginning).
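A sketch of that edit done with sed, demonstrated on a small sample so the substitution is easy to verify (the real calico.yaml contains the same commented-out block):

```shell
# Sample of the commented CALICO_IPV4POOL_CIDR block from calico.yaml
cat > /tmp/calico-snippet.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the block and set it to the --pod-network-cidr used in kubeadm init
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' /tmp/calico-snippet.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' /tmp/calico-snippet.yaml
cat /tmp/calico-snippet.yaml
```

Run the same two sed expressions against the downloaded calico.yaml itself to make the change in place.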

Apply the manifest

kubectl apply -f calico.yaml

After a short wait, check the node and pod status; everything is normal:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   62m     v1.19.0
k8s-node1    Ready    <none>   24m     v1.19.0
k8s-node2    Ready    <none>   6m52s   v1.19.0
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-7kwqm   1/1     Running   0          8m38s
calico-node-bqv2g                         1/1     Running   0          8m38s
calico-node-jgxln                         1/1     Running   0          8m38s
calico-node-t2qh4                         1/1     Running   0          6m55s
coredns-6d56c8448f-9mgzw                  1/1     Running   0          62m
coredns-6d56c8448f-s6z7b                  1/1     Running   0          62m
etcd-k8s-master                           1/1     Running   0          62m
kube-apiserver-k8s-master                 1/1     Running   0          62m
kube-controller-manager-k8s-master        1/1     Running   0          62m
kube-proxy-hqvkk                          1/1     Running   0          6m55s
kube-proxy-nc6l8                          1/1     Running   0          24m
kube-proxy-psfcn                          1/1     Running   0          62m
kube-scheduler-k8s-master                 1/1     Running   0          62m

Note:
If kubeadm init fails, it prints error messages; but even after fixing the cause, running kubeadm init again will still not succeed, because the first run left the environment in a dirty state. The current environment must be cleaned up so that initialization starts from a pristine state.

1. Reset the current environment

kubeadm reset
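kubeadm reset by itself may leave residue behind; the follow-up cleanup below is commonly needed for a truly pristine environment (a sketch; adjust paths to your own setup):

```shell
# Remove leftover CNI config and the stale kubeconfig, then flush iptables
rm -rf /etc/cni/net.d "$HOME/.kube/config"
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```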

2. If the Calico pods do not become ready, manually pull the images on each node to check whether they can be pulled at all:

grep image calico.yaml   # list the images, then try pulling each one on every node to see how fast it is

docker pull calico/xxx
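A sketch of pulling all of them in one go (calico/xxx above stands for whichever image names the grep actually prints):

```shell
# Extract the unique image names from the manifest and pull each one;
# a slow or failing pull points at the problem node
grep 'image:' calico.yaml | awk '{print $2}' | sort -u | while read -r img; do
  docker pull "$img"
done
```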

VI. Test the Cluster

  • Verify that a Pod works
  • Verify Pod network communication
  • Verify DNS resolution

Create a pod in the Kubernetes cluster and verify that it runs normally:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
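The checks listed above can be finished off with commands like these (busybox:1.28 is chosen because its nslookup is known to behave well in-cluster):

```shell
# Pod network: note which nodes the pods landed on, then curl the NodePort
kubectl get pods -o wide
# DNS: resolve the cluster's built-in service name from inside a pod
kubectl run dns-test --rm -it --image=busybox:1.28 -- nslookup kubernetes
```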


VII. Deploy the Dashboard UI

[root@k8s-master ~]# cat kubernertes-dashboard.yaml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Apply the manifest

kubectl apply -f  kubernertes-dashboard.yaml 

Check the pod status

[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-v79bx   1/1     Running   0          2m52s
kubernetes-dashboard-5dbf55bd9d-hblj7        1/1     Running   0          2m52s

Access: https://<any NodeIP>:30001

Create a service account and bind it to the default cluster-admin cluster role:

# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant the user permissions
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Get the user's token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Log in to the Dashboard with the token from the output.



Problems encountered during the whole deployment:



After a long time troubleshooting, it felt like a permissions problem, so I wiped the environment and redeployed, but the problem remained.
Finally, intending to watch the live logs, I opened a new terminal and ran apply once more to reload the component... and that, somehow, fixed it.
