In an earlier article we looked at A/B testing with nginx traffic splitting; this time we will look at how to use nginx to implement a simple gray release (canary release) based on an HTTP header, a cookie, or a request argument.
Overall the implementation is very simple: we only need to add a few directives to the location block of the nginx configuration. Assume the application has two versions, old and new, as shown below.
Use the $http_ variable to read an HTTP request header and match the value of foo, either exactly or with a regular expression, depending on the configuration:
location / {
    ....
    if ($http_foo = "bar") {      # exact match
    #if ($http_foo ~ "bar") {     # regex match (alternative)
        proxy_pass http://default-new-nginx-80;
        break;
    }
    proxy_pass http://default-old-nginx-80;
    ....
}
Use the $cookie_ variable to read a cookie from the request; again the value of foo can be matched exactly or with a regular expression:
location / {
    ....
    if ($cookie_foo ~ "^bar") {   # regex match
    #if ($cookie_foo = "bar") {   # exact match (alternative)
        proxy_pass http://default-new-nginx-80;
        break;
    }
    proxy_pass http://default-old-nginx-80;
    ...
}
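For a quick sanity check of the cookie rule, a request like the following (the address is just a placeholder for wherever this nginx is listening) should be routed to the new upstream, while a request without the cookie falls through to the old one:

# hypothetical test; replace 127.0.0.1 with your nginx address
curl --cookie "foo=bar" http://127.0.0.1/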
Use the $arg_ variable to read a query-string argument; the value of foo can likewise be matched exactly or with a regular expression:
location / {
    ....
    if ($arg_foo ~ "^bar") {      # regex match
    #if ($arg_foo = "bar") {      # exact match (alternative)
        proxy_pass http://default-new-nginx-80;
        break;
    }
    proxy_pass http://default-old-nginx-80;
    ...
}
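Likewise, the argument rule can be exercised by passing foo as a query parameter (again with a placeholder address); quoting the URL keeps the shell from mangling the ? and =:

# hypothetical test; replace 127.0.0.1 with your nginx address
curl "http://127.0.0.1/?foo=bar"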
Configuring this in plain nginx is straightforward, but my cluster runs on Kubernetes and I expose internal applications through nginx-ingress-controller, so the question becomes how to publish this configuration to the cluster as an Ingress in such a way that nginx-ingress-controller picks it up, writes it correctly into nginx, and reloads.
Assume there is an old version of the nginx application: requests that reach the old version return "old", and requests that reach the new version return "new". The Deployment and Service YAML for the old version are as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: old-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      run: old-nginx
  template:
    metadata:
      labels:
        run: old-nginx
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/xianlu/old-nginx
        imagePullPolicy: Always
        name: old-nginx
        ports:
        - containerPort: 80
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: old-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: old-nginx
  sessionAffinity: None
  type: NodePort
Next, create the new version of the application, i.e. the version we are about to gray-release:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: new-nginx
  template:
    metadata:
      labels:
        run: new-nginx
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/xianlu/new-nginx
        imagePullPolicy: Always
        name: new-nginx
        ports:
        - containerPort: 80
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: new-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: new-nginx
  sessionAffinity: None
  type: NodePort
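Assuming the two manifests above are saved as old-nginx.yaml and new-nginx.yaml (placeholder file names), they can be applied and verified along these lines:

kubectl apply -f old-nginx.yaml
kubectl apply -f new-nginx.yaml
kubectl get pods -l run=old-nginx
kubectl get pods -l run=new-nginx
kubectl get svc old-nginx new-nginx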
Once the applications are deployed, let's see how to configure the Ingress so that the rules end up correctly in nginx.conf. nginx-ingress-controller supports the nginx.ingress.kubernetes.io/configuration-snippet annotation, which lets us inject custom directives into the generated location block. Combined with nginx-ingress-controller's upstream naming convention of namespace-serviceName-port (here default-new-nginx-80), the annotation can be written as in the following Ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($http_foo ~ "^.*bar$") {
        proxy_pass http://default-new-nginx-80;
        break;
      }
  name: gray-release
  namespace: default
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: old-nginx
          servicePort: 80
        path: /
      - backend:
          serviceName: new-nginx
          servicePort: 80
        path: /
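The Ingress is created the same way as any other resource (the file name below is a placeholder for wherever you saved the manifest):

kubectl apply -f gray-release-ingress.yaml
kubectl get ingress gray-release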
Once the Ingress is created, look at nginx.conf inside the nginx-ingress-controller container: you will find the same configuration as in the plain nginx setup above. Now let's send a few requests to check whether the behaviour matches our expectation.
$ curl -H "Host: www.example.com" http://172.16.0.9
old
$ curl -H "Host: www.example.com" http://172.16.0.9
old
$ curl -H "Host: www.example.com" http://172.16.0.9
old
$ curl -H "Host: www.example.com" -H "foo: bar" http://172.16.0.9
new
$ curl -H "Host: www.example.com" -H "foo: bar" http://172.16.0.9
new
$ curl -H "Host: www.example.com" -H "foo: bar" http://172.16.0.9
new
Likewise, if we want to match on a cookie or a request argument, we can configure it in the same way as the header example (see the sketch below); nginx-ingress-controller will render the configuration into nginx for us and complete the gray release.
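As a sketch of what that could look like (not taken from a tested setup), the same annotation can carry a cookie or argument check instead of the header check; the upstream name still follows the namespace-serviceName-port convention:

nginx.ingress.kubernetes.io/configuration-snippet: |
  # route requests whose cookie foo starts with "bar" to the new version
  if ($cookie_foo ~ "^bar") {
    proxy_pass http://default-new-nginx-80;
    break;
  }
  # or, equivalently, match on the query argument foo
  #if ($arg_foo = "bar") {
  #  proxy_pass http://default-new-nginx-80;
  #  break;
  #}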