1. Comparison of Deployment Strategies
This section compares rolling deployment, blue-green deployment, and canary deployment.
Rolling deployment: the new version of the application gradually replaces the old one. The actual rollout happens over a period of time, during which old and new versions coexist without affecting functionality or user experience, and any new component that turns out to be incompatible with the old ones can be rolled back easily.
Blue-green deployment: the new version of the application is deployed into the green environment and put through functional and performance tests. Once the tests pass, traffic is routed from the blue environment to the green one, and green becomes the new production environment. With this approach, two identical production environments run in parallel.
Canary deployment: you roll the new application code out to a small slice of the production infrastructure. Once the release is signed off, only a small number of users are routed to it, which minimizes the impact of any defect. If no errors surface, the new version is gradually rolled out to the rest of the infrastructure. The name comes from mining: before descending, miners would send a canary into the shaft to test for toxic gases, and whether the bird survived told them whether it was safe to enter.
A canary deployment gradually shifts production traffic from version A to version B. The traffic is typically split by weight: for example, 90% of requests are sent to version A and 10% to version B.
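The walkthrough below approximates the weight with the pod-count ratio behind a single Service. If you need an explicit, finer-grained split, a service mesh can weight the traffic directly. A minimal sketch using an Istio VirtualService, assuming Istio is installed and a DestinationRule (not shown) defines the v1 and v2 subsets:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: v1      # subset assumed to exist in a DestinationRule
      weight: 90
    - destination:
        host: my-app
        subset: v2
      weight: 10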
2. Implementing Canary Deployment with Kubernetes
Main steps:
1. Deploy version v1; at this point the Service sends all traffic to v1.
2. Deploy version v2 with x/10 replicas and scale v1 down by x/10 at the same time (x being the original replica count), so that one tenth of the traffic reaches the v2 service.
3. Gradually scale v1 down and v2 up until v2 has completely replaced v1.
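Condensed into commands, using the manifests and names from the walkthrough below:

$kubectl apply -f app-v1.yaml                  # v1 with 10 replicas behind the Service
$kubectl apply -f app-v2.yaml                  # v2 with 1 replica
$kubectl scale --replicas=9 deploy my-app-v1   # keep the total at 10, so ~10% goes to v2
# verify v2, then shift the rest of the traffic
$kubectl scale --replicas=10 deploy my-app-v2
$kubectl delete deploy my-app-v1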
2.1 Setting Up a Demo Service
app-v1.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/canary/native/app-v1.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
$kubectl apply -f app-v1.yaml
service/my-app created
deployment.apps/my-app-v1 created
$kubectl get service my-app
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
my-app   NodePort   10.98.198.198   <none>        80:30969/TCP   23m
$curl 10.98.198.198:80
Host: my-app-v1-c9b7f9985-5qvz4, Version: v1.0.0
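Because the Service is of type NodePort, it is also reachable from outside the cluster on any node's IP at the allocated node port (30969 in the output above; <node-ip> is a placeholder for one of your node addresses):
$curl <node-ip>:30969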
2.2 Upgrading the Application with a Canary Deployment
Next, we upgrade my-app-v1 to my-app-v2:
app-v2.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/canary/native/app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Open watches to monitor changes to the pods and Deployments:
$kubectl get --watch deployment
$kubectl get --watch pod
Start the upgrade:
$kubectl apply -f app-v2.yaml
deployment.apps/my-app-v2 created
You can now see that one replica of my-app-v2 has started:
$kubectl get --watch deployment
NAME READY UP-TO-DATE AVAILABLE AGE
my-app-v1 10/10 10 10 45m
my-app-v2 1/1 1 1 46s
$kubectl scale --replicas=9 deploy my-app-v1
deployment.apps/my-app-v1 scaled
$kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
my-app-v1 9/9 9 9 47m
my-app-v2 1/1 1 1 2m48s
my-app-v1 has now been scaled down to 9 replicas, so through the Service's load balancing my-app-v2 receives about 10% of the traffic (1 pod out of 10):
$service=10.98.198.198:80
$while sleep 0.1; do curl "$service"; done
Host: my-app-v1-c9b7f9985-mqnmr, Version: v1.0.0
Host: my-app-v1-c9b7f9985-bl4g7, Version: v1.0.0
Host: my-app-v1-c9b7f9985-rmng9, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mz9hc, Version: v1.0.0
Host: my-app-v1-c9b7f9985-bl4g7, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mz9hc, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mm6fp, Version: v1.0.0
Host: my-app-v2-77fc8c9499-m6n9j, Version: v2.0.0
Host: my-app-v1-c9b7f9985-l69pf, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mqnmr, Version: v1.0.0
Host: my-app-v1-c9b7f9985-mz9hc, Version: v1.0.0
Host: my-app-v1-c9b7f9985-62zb4, Version: v1.0.0
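A dozen samples is noisy; for a slightly more systematic check, count the v2 responses over a fixed number of requests (a sketch; with a 9:1 pod split the count should land near 10):
$for i in $(seq 1 100); do curl -s "$service"; done | grep -c v2.0.0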
Once verification passes, we gradually scale my-app-v2 up to 10 replicas and take my-app-v1 down to zero:
$kubectl scale --replicas=10 deploy my-app-v2
$kubectl delete deploy my-app-v1
Hitting the service again confirms that my-app-v2 now receives all of the traffic:
$while sleep 0.1; do curl "$service"; done
Once testing is complete, clean up:
$kubectl delete all -l app=my-app
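The -l flag removes every object carrying the app=my-app label, i.e. the Service and both Deployments. To preview what the selector matches before deleting:
$kubectl get all -l app=my-app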
3. Implementing Blue-Green Deployment with Kubernetes
Main steps:
1. Deploy version v1; at this point the Service sends all traffic to v1.
2. Deploy version v2 and wait until the rollout completes.
3. Switch the Service's traffic from v1 to v2.
4. Tear down v1.
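Condensed into commands, using the manifests from the walkthrough below:

$kubectl apply -f app-v1.yaml   # blue: the Service selects version=v1.0.0
$kubectl apply -f app-v2.yaml   # green: runs alongside but receives no traffic yet
$kubectl patch service my-app -p '{"spec":{"selector":{"version":"v2.0.0"}}}'
$kubectl delete deploy my-app-v1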
First, monitor the pods' status in real time with:
$watch kubectl get pod
3.1 Setting Up a Demo Service
app-v1.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/blue-green/single-service/app-v1.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  # Note here that we match both the app and the version
  selector:
    app: my-app
    version: v1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Deploy the Service and version v1:
$kubectl apply -f app-v1.yaml
service/my-app created
deployment.apps/my-app-v1 created
$kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        14d
my-app       NodePort    10.111.231.242   <none>        80:31540/TCP   18s
$while sleep 0.1; do curl 10.111.231.242:80; done
Host: my-app-v1-c9b7f9985-wqpf5, Version: v1.0.0
Host: my-app-v1-c9b7f9985-wqpf5, Version: v1.0.0
Host: my-app-v1-c9b7f9985-gnhr4, Version: v1.0.0
3.2 Deploying v2
app-v2.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/blue-green/single-service/app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v2.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Once the rollout completes, two Deployments exist side by side: 3 pods run v1 and another 3 run v2. The Service, however, still sends all traffic to v1, because its selector matches version: v1.0.0.
The next step is to switch the Service's traffic over to v2:
$kubectl patch service my-app -p '{"spec":{"selector":{"version":"v2.0.0"}}}'
All the traffic now reaches v2.
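To confirm, sample the Service again with the same loop as in 3.1; every response should now report Version: v2.0.0:
$while sleep 0.1; do curl 10.111.231.242:80; done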
After verifying that everything works, delete v1:
$kubectl delete deploy my-app-v1
If there is a problem, roll back by pointing the Service back at v1:
$kubectl patch service my-app -p '{"spec":{"selector":{"version":"v1.0.0"}}}'
4. Implementing Rolling Deployment with Kubernetes
Main steps:
1. Deploy version v1; at this point the Service sends all traffic to v1.
2. Define version v2 with a RollingUpdate strategy.
3. Deploy v2.
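Condensed into commands, using the manifests from the walkthrough below (kubectl rollout status is a standard way to follow the rollout and is not part of the original listing):

$kubectl apply -f app-v1.yaml
$kubectl apply -f app-v2.yaml            # same Deployment name, so this triggers a rolling update
$kubectl rollout status deploy my-app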
4.1 Setting Up a Demo Service
app-v1.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/ramped/app-v1.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 10
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v1.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          periodSeconds: 5
Deploy app-v1.yaml:
$kubectl apply -f app-v1.yaml
service/my-app created
deployment.apps/my-app created
$kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        14d
my-app       NodePort    10.100.43.22   <none>        80:32725/TCP   45s
$curl 10.100.43.22:80
Host: my-app-c9b7f9985-ph2fz, Version: v1.0.0
4.2 Performing the Rolling Upgrade
Monitor the pod changes with:
$watch kubectl get pod
app-v2.yaml: https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/ramped/app-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 10
  # Here we define the rolling update strategy:
  # - maxSurge defines how many pods can be added at a time
  # - maxUnavailable defines how many pods can be unavailable
  #   during the rolling update
  #
  # Setting maxUnavailable to 0 ensures we keep the appropriate
  # capacity during the rolling update.
  # You can also use percentage-based values instead of integers.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  # The selector field tells the Deployment which pods to update with
  # the new version. This field is optional, but since the pods carry
  # a uniquely identifying label, in this case the "version" label,
  # matchLabels must be redefined here without the version field.
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v2.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9101"
    spec:
      containers:
      - name: my-app
        image: containersol/k8s-deployment-strategies
        ports:
        - name: http
          containerPort: 8080
        - name: probe
          containerPort: 8086
        env:
        - name: VERSION
          value: v2.0.0
        livenessProbe:
          httpGet:
            path: /live
            port: probe
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: probe
          # Initial delay is set to a high value for better
          # visibility of the ramped deployment
          initialDelaySeconds: 15
          periodSeconds: 5
Start the upgrade:
$kubectl apply -f app-v2.yaml
deployment.apps/my-app configured
You can watch the pods being replaced step by step: with maxSurge: 1 and maxUnavailable: 0, Kubernetes adds one v2 pod, waits for it to become ready, and only then removes a v1 pod, so serving capacity never drops below 10 replicas.
While the replacement is in progress, you can pause the rollout at any time to verify the traffic; the pause and resume commands are:
$kubectl rollout pause deploy my-app
deployment.apps/my-app paused
$kubectl rollout resume deploy my-app
deployment.apps/my-app resumed
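If the new version misbehaves mid-rollout, the whole upgrade can also be reverted with a standard kubectl command (not shown in the original walkthrough):
$kubectl rollout undo deploy my-app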