In most cases a k8s cluster needs to serve traffic to the outside world, and there are many ways to expose services. Here I pick Traefik to walk through; the others, including Istio, are not covered in detail for now and will be added in later updates. In short: no time to explain, hop on.

Introduction to Traefik

Traefik is an open-source edge router. Like nginx or apache it acts as a reverse proxy / gateway: it receives requests on behalf of the system and figures out which components are responsible for handling them. Traefik automatically discovers the right configuration for your services and hot-reloads it, supports multiple load-balancing algorithms, circuit breakers and retries, provides monitoring and a management UI, is written in Go, and embraces Kubernetes natively.

Noteworthy features in Traefik 2.0

  • Uses CRDs to provide what previously required Ingress + annotations
  • TCP port routing with multi-protocol support
  • Introduces Middleware, so routes can be fully customised with middleware
  • Canary releases

Deploying Traefik

1. Deployment planning

1.1 Ways to deploy an Ingress controller

  • Deployment + LoadBalancer: suited to public clouds, since the cloud provides the LoadBalancer; you can of course also set up your own LoadBalancer on an internal network.
  • Deployment + NodePort: the Ingress controller is exposed on a fixed port of the cluster nodes, but a front-end load balancer is still required.
  • DaemonSet + HostNetwork + nodeSelector: deployed on designated nodes, using hostPort to bind directly to the host, so the nodes running the IngressController act like the edge nodes of a traditional architecture. Performance is better than NodePort; the drawback is that each node can run only one ingress-controller pod.

1.2 Current environment

My k8s environment consists of one Huawei TaiShan 2280 V2 ARM server plus several x86_64 servers. The TaiShan 2280 V2 acts as the master node and also runs Traefik as the edge router and load balancer (my workload is not that demanding and I want to make full use of this recently acquired ARM server; high availability is not a concern for now and will be addressed later), while the other servers act purely as worker nodes. Traefik, serving as edge router and load balancer, is deployed using hostPort on a designated node.

2. Deploy the CRD resources

This part is fairly fixed; the manifests from the official documentation can be used as-is.

2.1 Create the traefik-crd.yaml file

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced

2.2 Create the CRD (CustomResourceDefinition) resources

$ kubectl apply -f traefik-crd.yaml
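
To confirm that the five CustomResourceDefinitions were registered, they can be listed by their API group (a plain kubectl query, nothing specific to this setup):

$ kubectl get crd | grep traefik.containo.us

It should show ingressroutes, ingressroutetcps, middlewares, tlsoptions and traefikservices.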

3. Create RBAC permissions

Kubernetes 1.6 introduced role-based access control (RBAC), which allows fine-grained control over Kubernetes resources and APIs. Traefik needs certain permissions, so we create a ServiceAccount for it up front and grant it the required permissions.

3.1 Create the traefik-rbac.yaml file

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller

rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutetcps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - tlsoptions
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - traefikservices
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: default

3.2 Create the RBAC resources

$ kubectl apply -f traefik-rbac.yaml
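
As an optional sanity check, the ClusterRole and ClusterRoleBinding can be inspected, and kubectl auth can-i can impersonate the ServiceAccount (standard kubectl subcommands; the resource and account names are the ones defined above):

$ kubectl get clusterrole,clusterrolebinding traefik-ingress-controller
$ kubectl auth can-i list ingressroutes.traefik.containo.us \
    --as=system:serviceaccount:default:traefik-ingress-controller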

4. Node settings (optional)

4.1 Labels

Because my cluster contains an ARM server and I want traefik to run on that specific node (which also acts as the load-balancing and routing node), a label needs to be set. In my environment this is strictly optional, since there is only one ARM server and the built-in system labels could be used directly instead.

Set the label:

$ kubectl label node taishan2280v2 IngressProxy=true

View the labels:

$ kubectl get nodes --show-labels

The command above also shows the labels set by default, which you can make use of directly.
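
To verify that the label ended up on the intended node only, a label selector can be used (this simply filters on the IngressProxy label set above):

$ kubectl get nodes -l IngressProxy=true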

4.2 Set taints and tolerations

4.2.1 View taints

Taints are set on nodes, so we can inspect a node's information to see whether it carries a taint and what that taint is. Inspect a node as follows:

  • Syntax: kubectl describe nodes [node-name]
$ kubectl describe node taishan2280v2

The node information looks like this:

Name: taishan2280v2
Labels: IngressProxy=true
        beta.kubernetes.io/arch=arm64
        beta.kubernetes.io/os=linux
        kubernetes.io/arch=arm64
        kubernetes.io/hostname=taishan2280v2
        kubernetes.io/os=linux
        node-role.kubernetes.io/master=
...
Taints: node-role.kubernetes.io/master:NoSchedule     # taint information

4.2.2 Set a taint

  • Syntax: kubectl taint node [node] key=value:[effect]
    where effect can be NoSchedule | PreferNoSchedule | NoExecute, with the following meanings:

  • PreferNoSchedule: try to avoid scheduling Pods onto the node.
  • NoSchedule: Pods must never be scheduled onto the node.
  • NoExecute: Pods are not scheduled onto the node, and Pods already running on it are evicted.
$ kubectl taint node taishan2280v2 node-role.kubernetes.io/master:NoSchedule
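
For reference, appending a trailing minus sign to the same command removes the taint again (standard kubectl taint syntax; the node name is my master node):

$ kubectl taint node taishan2280v2 node-role.kubernetes.io/master:NoSchedule-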

4.2.3 Set tolerations

If the node you plan to schedule onto has a taint, the deployment needs a matching toleration. The key snippet is shown here; see the full deployment file below.

...
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
...

Matching the key and effect of the taint is sufficient.

4.2.4 References

For more on taints and tolerations, see the reference documentation (to be written :)).

5. Deploy Traefik

Traefik is deployed as a DaemonSet with hostPort (which also accommodates the later hostPort + designated-node approach).

5.1 Create traefik-deploy.yaml

# Create the ServiceAccount referenced by the RBAC binding
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller

---
# Create the traefik Service
apiVersion: v1
kind: Service
metadata:
  name: traefik

spec:
  ports:
    - protocol: TCP
      name: web
      port: 8000
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 4443
  selector:
    app: traefik

---
# Create the traefik DaemonSet
kind: DaemonSet
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik

spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.0.7
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.web.Address=:8000
            - --entrypoints.websecure.Address=:4443
            - --providers.kubernetescrd
            - --certificatesresolvers.default.acme.tlschallenge
            - --certificatesresolvers.default.acme.email=[email protected]
            - --certificatesresolvers.default.acme.storage=acme.json
            # Please note that this is the staging Let's Encrypt server.
            # Once you get things working, you should remove that whole line altogether.
            - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
          ports:
            - name: web
              containerPort: 8000
              hostPort: 8000
            - name: websecure
              containerPort: 4443
              hostPort: 4443
            - name: admin
              containerPort: 8080
          resources:
            limits:
              cpu: 2000m
              memory: 1024Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
      tolerations: # tolerate all taints, so the pod can still be scheduled if the node is tainted
        - operator: "Exists"
      nodeSelector: # node selector: start only on nodes carrying this label
        IngressProxy: "true"

Note: the image used here is traefik:v2.0.7. Version 2.1.1 was already out when this article was written, and I did try it, but switching versions caused problems, so I stayed on v2.0.7.

5.2 Deploy traefik

$ kubectl apply -f traefik-deploy.yaml
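
Once applied, it is worth checking that the DaemonSet was scheduled onto the labelled node (plain kubectl queries against the labels defined above):

$ kubectl get daemonset traefik
$ kubectl get pods -l app=traefik -o wide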

6. Configure Traefik routing rules

6.1 Configure an HTTP routing rule (Traefik Dashboard as an example)

The Traefik application is now deployed, but to let the outside world reach services inside Kubernetes we still need routing rules. Since the Traefik Dashboard was enabled above, we start with a routing rule for the Traefik Dashboard so that it is reachable from outside.

6.1.1 Create the Traefik Dashboard routing rule file traefik-dashboard-route.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik`)
      kind: Rule
      services:
        - name: traefik
          port: 8080

6.1.2 Create the Traefik Dashboard routing rule object

$ kubectl apply -f traefik-dashboard-route.yaml

Next, configure Hosts. A client that wants to reach the service by domain name needs DNS resolution; since there is no DNS server here, edit the hosts file and bind the IP of the node running Traefik to the custom host name. Mine looks like this:

172.17.1.254  traefik 

Once that is configured, open http://traefik in a browser to reach the Traefik Dashboard.
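
If you prefer to test the rule before (or instead of) editing the hosts file, curl can send the Host header explicitly; the IP below is the traefik node in my environment, so adjust it to yours:

$ curl -I -H 'Host: traefik' http://172.17.1.254:8000/dashboard/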

6.2 Configure an HTTPS routing rule (Kubernetes Dashboard as an example)

The Kubernetes Dashboard is served over HTTPS, so we configure an HTTPS-based routing rule for it and specify a certificate.

6.2.1 Download and deploy the Kubernetes Dashboard

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml

For an offline deployment, have the images ready in advance. Also, since my Kubernetes Dashboard runs on the ARM server, the image and nodeSelector in the yaml file need to be changed, and the taints and tolerations need to be taken into account. The key parts I changed in the file are shown below:

recommended.yaml

---
kind: Deployment
...
spec:
...
  template:
      ...
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: IfNotPresent
      ...
      nodeSelector:
        "kubernetes.io/arch": arm64
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
...
---
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.2
          imagePullPolicy: IfNotPresent
      ...
      nodeSelector:
        "kubernetes.io/arch": arm64
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

Deploy kubernetes-dashboard:

$ kubectl apply -f recommended.yaml

In an offline environment you also need to export and import the corresponding images with docker save and docker load; the process is described in the earlier chapter on deploying Kubernetes offline.
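
A minimal sketch of that export/import, assuming the two dashboard images above and an arbitrary archive name:

# on a machine that can pull the images
$ docker save kubernetesui/dashboard:v2.0.0-beta8 kubernetesui/metrics-scraper:v1.0.2 -o dashboard-images.tar
# copy the archive to the offline node, then
$ docker load -i dashboard-images.tar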

6.2.2 Create a self-signed certificate

$ openssl req -x509 -nodes -days 36500 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=kubernetes-dashboard-admin"
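
The generated certificate can be inspected with standard openssl tooling if you want to confirm the CN and validity period:

$ openssl x509 -in tls.crt -noout -subject -dates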

6.2.3 Store the certificate in a Kubernetes Secret

$ kubectl create secret generic kubernetes-dashboard-admin --from-file=tls.crt --from-file=tls.key -n kubernetes-dashboard
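
Afterwards the Secret should contain both tls.crt and tls.key as data keys:

$ kubectl describe secret kubernetes-dashboard-admin -n kubernetes-dashboard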

6.2.4 Create the kubernetes-dashboard account and related bindings

  • Create the file kubernetes-dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
   k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin-bind-cluster-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
  • Create the account and binding
$ kubectl apply -f kubernetes-dashboard-admin.yaml -n kubernetes-dashboard

6.2.5 Create the Kubernetes Dashboard routing rule file kubernetes-dashboard-route.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard-route
spec:
  entryPoints:
    - websecure
  tls:
    secretName: kubernetes-dashboard-admin
  routes:
    - match: Host(`kubernetes-dashboard`) 
      kind: Rule
      services:
        - name: kubernetes-dashboard
          port: 443

6.2.6 Create the Kubernetes Dashboard routing rule object

$ kubectl apply -f kubernetes-dashboard-route.yaml -n kubernetes-dashboard
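
Both routing rules can now be listed through the CRD (the HTTP rule lives in the default namespace, the HTTPS rule in kubernetes-dashboard):

$ kubectl get ingressroute -n default
$ kubectl get ingressroute -n kubernetes-dashboard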

6.2.7 Look up the token and access the dashboard

  • Look up and record the token
kubectl describe secret/$(kubectl get secret -n kubernetes-dashboard | grep dashboard-admin-token | awk '{print $1}') -n kubernetes-dashboard
  • Configure the Hosts file and access the dashboard using the token
172.17.1.254  kubernetes-dashboard

Once that is configured, open https://kubernetes-dashboard in a browser and sign in with the token.
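
As with the HTTP route, the rule can also be tested from the command line first; -k is needed because the certificate is self-signed, and the IP is again my traefik node:

$ curl -k -I -H 'Host: kubernetes-dashboard' https://172.17.1.254:4443/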

6.2.8 Deploy metrics-server

The kubernetes-dashboard shows no CPU or memory usage; the reason is that metrics-server has not been deployed.

  • Download the deployment files

Choose the deployment files that match your Kubernetes version; the URLs below are for the 1.8+ directory.

$ mkdir metrics-server
$ cd metrics-server
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/aggregated-metrics-reader.yaml
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/auth-delegator.yaml
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/auth-reader.yaml
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-apiservice.yaml
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-server-deployment.yaml
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/metrics-server-service.yaml
$ wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/resource-reader.yaml
  • Modify the deployment file metrics-server-deployment.yaml
...
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-arm64:v0.3.6
...
        imagePullPolicy: IfNotPresent # changed
...
        command: # added
        - /metrics-server
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
...
      nodeSelector:
        "kubernetes.io/arch": arm64
        "kubernetes.io/os": linux
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
  • Prepare the images

Option 1 (requires unrestricted access to k8s.gcr.io):

docker pull k8s.gcr.io/metrics-server-arm64:v0.3.6

Option 2:

docker pull rancher/metrics-server:v0.3.6
docker tag rancher/metrics-server:v0.3.6 k8s.gcr.io/metrics-server-arm64:v0.3.6

Notes:

  1. If you pull rancher/metrics-server directly from Docker Hub, it selects the image architecture based on the architecture of the host doing the pull; for example, an arm64 host emulated with kvm+qemu can be used to pull the arm64 image.
  2. For an offline installation the images additionally need to be exported and imported.
  • Install metrics-server
kubectl apply -f ./

"./"为之前创建的metrics-server的目录,里面存放的有部署需要的所有yaml文件。