Traefik
Traefik is a lightweight HTTP reverse proxy and load balancer written in Go. Because it can configure and refresh backend nodes automatically, it is supported by most container platforms, such as Kubernetes, Swarm, and Rancher. Traefik interacts with the Kubernetes API in real time, so it reacts quickly when the endpoints behind a Service change. Overall, Traefik runs very well in Kubernetes.
Traefik also has many other features:
- Fast
- No extra dependencies to install: a single executable compiled from Go
- A minimal official Docker image is available
- Supports many backends, such as Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, and more
- REST API support
- Hot reload of the configuration file, no process restart needed
- Automatic circuit breaking
- Round-robin and other load-balancing algorithms
- A clean web UI
- WebSocket, HTTP/2, and gRPC support
- Automatic HTTPS certificate renewal
- High-availability cluster mode
Next we will use Traefik in place of Nginx + Ingress Controller to provide reverse proxying and service exposure.
So what is the difference between the two? Put simply: with nginx as the front-end load balancer in Kubernetes, an Ingress Controller continuously talks to the Kubernetes API, watches backend Services and Pods for changes, dynamically rewrites the Nginx configuration, and reloads it so the changes take effect; that is how service discovery is achieved. Traefik, on the other hand, is designed to interact with the Kubernetes API directly: it senses changes to backend Services and Pods itself, updates its own configuration, and hot-reloads it. The end result is much the same, but Traefik is faster and more convenient, supports more features, and makes reverse proxying and load balancing more direct and efficient.
1.Role Based Access Control configuration (Kubernetes 1.6+ only)
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
This sets up authorization; if the official documentation is unclear, download the manifest and read through it.
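For reference, the upstream traefik-rbac.yaml defines roughly the following ClusterRole and ClusterRoleBinding (a sketch of the Traefik 1.x manifest; the downloaded file is authoritative):

```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
```

In short, it grants Traefik read/watch access to Services, Endpoints, Secrets, and Ingresses cluster-wide.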
2.Deploy Træfik using a Deployment or DaemonSet
- To deploy Træfik to your cluster start by submitting one of the YAML files to the cluster with kubectl:
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml (this template has some issues; I used the DaemonSet template first)
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
The difference between a Deployment and a DaemonSet: a DaemonSet creates one Pod on every node, whereas a Deployment runs a replica count you control. With many nodes a DaemonSet is unnecessary: on 100 nodes it would create 100 Pods, which is pointless. So take the DaemonSet template and modify it yourself, as follows:
- In the DS template, simply change kind to Deployment:
- kind: Deployment
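A minimal sketch of that edit, assuming the field layout of the upstream traefik-ds.yaml (the pod template itself is left untouched):

```yaml
kind: Deployment             # changed from DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1                # a Deployment needs an explicit replica count
  template:
    # ...keep the pod template from traefik-ds.yaml unchanged...
```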
3.Check the Pods
- # kubectl --namespace=kube-system get pods -o wide
- traefik-ingress-controller-79877bbc66-p29jh 1/1 Running 0 32m 10.249.243.182 k8snode2-175v136
Check which node the Pod landed on; with a Deployment the scheduler assigns a node for you.
4.Ingress and UI
- kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
Now create a web application of your own for testing:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: www.ha.com
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
```
5.Testing: success.
6.HTTPS certificates
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
  tls:
  - secretName: traefik-ui-tls-cert
```
How does the official guide import the certificate? Note: both the key and the crt files are required.
- openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=traefik-ui.minikube"
- kubectl -n kube-system create secret tls traefik-ui-tls-cert --key=tls.key --cert=tls.crt
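The resulting Secret stores the PEM pair base64-encoded under the standard tls keys; it looks roughly like this:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: traefik-ui-tls-cert
  namespace: kube-system
type: kubernetes.io/tls
data:
  tls.crt: <base64 of tls.crt>
  tls.key: <base64 of tls.key>
```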
7.Basic Authentication
- A. Use htpasswd to create a file containing the username and the MD5-encoded password:
- htpasswd -c ./auth myusername
- You will be prompted for a password which you will have to enter twice. htpasswd will create a file with the following:
- cat auth
- myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0
- B. Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd
- kubectl create secret generic mysecret --from-file auth --namespace=monitoring
- Note
- Secret must be in same namespace as the Ingress object.
- C. Attach the following annotations to the Ingress object:
- ingress.kubernetes.io/auth-type: "basic"
- ingress.kubernetes.io/auth-secret: "mysecret"
- They specify basic authentication and reference the Secret mysecret containing the credentials.
- Following is a full Ingress example based on Prometheus:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090
```
Template 1, exposing multiple domains: check the UI page again and it updates immediately; you can see the newly configured dashboard.k8s.traefik and ela.k8s.traefik.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dashboard-ela-k8s-traefik
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.k8s.traefik
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
  - host: ela.k8s.traefik
    http:
      paths:
      - path: /
        backend:
          serviceName: elasticsearch-logging
          servicePort: 9200
```
Template 2, routing by path.
Note: because we forward based on path here, the rule type must be PathPrefixStrip, configured with the annotation traefik.frontend.rule.type: PathPrefixStrip.
Check the UI page again and it likewise updates immediately; you can see the newly configured my.k8s.traefik/dashboard and my.k8s.traefik/kibana.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-k8s-traefik
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: my.k8s.traefik
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
      - path: /kibana
        backend:
          serviceName: kibana-logging
          servicePort: 5601
```
8.Automatic circuit breaking
When a service in the cluster starts producing a large number of request errors, takes too long to respond, or returns too many 5xx status codes, we want it removed from rotation automatically, i.e. requests stop being forwarded to it, with no manual intervention. Traefik makes this easy to configure: it can trip a circuit breaker for a service based on a policy expression you define.
- NetworkErrorRatio() > 0.5 : trip when the service's network error rate reaches 50%.
- LatencyAtQuantileMS(50.0) > 50 : trip when the 50th-percentile latency exceeds 50 ms.
- ResponseCodeRatio(500, 600, 0, 600) > 0.5 : trip when status codes in [500, 600) make up more than 50% of all codes in [0, 600).
Example (note: a YAML map cannot repeat a key, so only one circuitbreaker expression can be set per Service; pick one):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wensleydale
  annotations:
    # alternative: traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
    traefik.backend.circuitbreaker: "LatencyAtQuantileMS(50.0) > 2000"  # trip above 2 s latency
```
9.Official documentation:
For everything else, consult the official documentation:
https://docs.traefik.io/user-guide/kubernetes/
10.update
Business growth meant adding nodes; with 20+ nodes, running Traefik as a DaemonSet on every node wastes resources. How can we pin Traefik to just a few machines? After searching the docs I found this solution: label the nodes, and run the DaemonSet only on the labeled nodes (reference: https://www.kubernetes.org.cn/daemonset).
Example:
Label three nodes:
- kubectl label node k8snode10-146v78-taiji traefik=svc   # repeat once for each of the three node names
- ########## To remove the label:
- kubectl label node k8snode1-174v136-taiji traefik-
- View the labels:
- [root@k8s-m1 Traefik]# kubectl get nodes --show-labels
- NAME STATUS ROLES AGE VERSION LABELS
- k8snode1-174v136-taiji Ready node 42d v1.10.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8snode1-174v136-taiji,node-role.kubernetes.io/node=,traefik=svc
- [root@k8s-m1 Traefik]# cat traefik-ds.yaml

```yaml
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      nodeSelector:
        traefik: "svc"       # these two lines are the key change
      ...................
```
- Verify:
- [root@k8s-m1 Traefik]# kubectl get ds -n kube-system
- NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
- traefik-ingress-controller 3 3 3 3 3 traefik=svc
Summary: later on, Traefik capacity can be scaled with business volume simply by labeling additional nodes.
11.Rate limiting
Official docs: valid values for extractorfunc are:
- client.ip
- request.host
- request.header.<header name>
We limit along two dimensions (request.host | client.ip).
The first priority is to make sure the IP handed over by the upstream HAPROXY, LVS, nginx, or CDN is the real user IP, not the IP of the load balancer in front.
In haproxy this can be done with either of the following (pick one):
- option forwardfor
- option forwardfor header Client-IP
nginx can pass the client IP to the upstream via X-Forwarded-For:
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
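In Traefik 1.7 the limit itself is set through the ingress.kubernetes.io/rate-limit annotation on the Ingress. A sketch; the name, host, backend, and the period/average/burst numbers are illustrative, not the values used in the tests below:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ratelimit-example        # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/rate-limit: |
      extractorfunc: client.ip   # or request.host
      rateset:
        rs1:
          period: 3s
          average: 5             # sustained requests per period
          burst: 10              # short-term burst allowance
spec:
  rules:
  - host: test.if.org
    http:
      paths:
      - backend:
          serviceName: nginx-svc # hypothetical backend
          servicePort: 80
```

Requests beyond the configured rate are answered with 429 Too Many Requests, which is what the log excerpts below show.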
1. client.ip verification: we tested through HAPROXY, which passed the client IP on to Traefik. The client.ip rate limiting behaved as expected:
- "time":"2019-01-10T02:03:25Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:25Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:26Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:29Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:29Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:29Z"} "BackendName":"Træfik" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:32Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T02:03:32Z"} "BackendName":"test.if.org/" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
2. request.host verification: again tested through HAPROXY; the request.host rate limiting also behaved as expected:
- "time":"2019-01-10T03:14:10Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:11Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:13Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:14Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:15Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:15Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:21Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:22Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"200 OK" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:23Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
- "time":"2019-01-10T03:14:24Z"} "RequestHost":"test.if.org" "DownstreamStatusLine":"429 Too Many Requests" "request_Client-Ip":"127.0.0.1"
12.Canary (A/B) releases
Canary weighting for Traefik on Kubernetes was fixed upstream in v1.7.5:
- [k8s] Support canary weight for external name service (#4135 by yue9944882)
Resulting backend weights, as reported by the Traefik API:

```json
"test.if.org/": {
  "servers": {
    "hpa-httpd-5856fd66bf-2qpm6": {
      "url": "http://10.249.221.61:80",
      "weight": 90000
    },
    "hpb-httpd-6bc6f55488-mllq2": {
      "url": "http://10.249.89.29:80",
      "weight": 10000
    }
  },
```
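The 90/10 split above can be produced with Traefik 1.7's service-weights annotation, listing both Services under the same rule. A sketch; the Ingress name and the Service names hpa-svc and hpb-svc are assumptions based on the Pod names in the output:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary-example           # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/service-weights: |
      hpa-svc: 90%
      hpb-svc: 10%
spec:
  rules:
  - host: test.if.org
    http:
      paths:
      - backend:
          serviceName: hpa-svc   # stable version, 90% of traffic
          servicePort: 80
      - backend:
          serviceName: hpb-svc   # canary version, 10% of traffic
          servicePort: 80
```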
13.Session affinity (sticky sessions)
How sticky sessions work: on a client's first request, the reverse proxy assigns it a backend server and sends that server's identity back to the client via Set-Cookie. On its next visit the client presents this cookie, which contains the backend the proxy assigned it last time. In Nginx this mechanism is provided by a plugin named Sticky; Traefik has the equivalent functionality built in, and it is easy to enable and configure in Kubernetes.
This solves the problem of authentication state being lost when the first request is served by Pod A and the second lands on Pod B: affinity keeps each client on a consistent Pod.
Configuration at the Service layer:
```yaml
metadata:
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
```
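A fuller sketch of a sticky Service; the name auth-svc and the selector are hypothetical, and the session-cookie-name annotation (supported in Traefik 1.7) is optional:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: auth-svc                 # hypothetical name
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"
spec:
  selector:
    app: auth                    # hypothetical selector
  ports:
  - port: 80
    targetPort: 80
```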
Verification:
Subsequent web requests carry the session cookie in their headers.