When working with Kubernetes you inevitably run into version upgrades. I develop almost exclusively in Java, so I also run into slow application startup: while a Deployment is being updated, Java's delayed startup causes requests to fail. One way around this is a canary release:
use two Deployments, one per version, carrying the same labels, and take the old Deployment down once the new version has fully started. But this approach is operationally complex, and it works against Kubernetes' own update mechanism.
To address this, Kubernetes provides readiness probes: they tell the kubelet when a container is ready to accept request traffic, and a Pod is only treated as ready once all of its containers are ready.
I won't rehash the probe documentation here; the link at the end of this post goes to the official tutorial. Instead, here is the test flow I ran, to make things easier to follow.
The service under test: a single GET endpoint, /hello, with no parameters; it returns "Hello, Spring!" and writes a log line. At startup it deliberately sleeps for 20 seconds to simulate a slow-booting application:
public static void main(String[] args) throws InterruptedException {
    // Sleep 20 seconds before booting Spring to simulate a slow-starting JVM app.
    for (int i = 0; i < 20; i++) {
        System.out.println(i + " seconds late run");
        Thread.sleep(1000);
    }
    SpringApplication.run(DemoApplication.class, args);
}

public Mono<ServerResponse> hello(ServerRequest request) {
    log.info("hello!!!");
    return ServerResponse.ok()
            .contentType(MediaType.TEXT_PLAIN)
            .body(BodyInserters.fromObject("Hello, Spring!"));
}

@Bean
public RouterFunction<ServerResponse> route(GreetingHandler greetingHandler) {
    return RouterFunctions.route(
            RequestPredicates.GET("/hello").and(RequestPredicates.accept(MediaType.TEXT_PLAIN)),
            greetingHandler::hello);
}
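The manifests below probe port 8083, so the application must listen there. The original configuration isn't shown; a minimal sketch, assuming a stock Spring Boot setup, would be:

```
# application.properties (assumed; not part of the original post)
server.port=8083
```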
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-demo-server
  namespace: paas
spec:
  replicas: 3
  selector:
    matchLabels:
      run: client-demo-server
  template:
    metadata:
      labels:
        run: client-demo-server
    spec:
      containers:
        - name: client-demo-server-containers
          image: registry.cn-beijing.aliyuncs.com/spring-cloud-client-demo-image:0.0.2
          # The key part is the two probe configs below: they specify the
          # port and path to request, plus the timing parameters.
          readinessProbe:
            httpGet:
              port: 8083
              path: /hello
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 100
          # Liveness probe: if the request keeps failing, restart the container.
          livenessProbe:
            httpGet:
              port: 8083
              path: /hello
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 20
          volumeMounts:
            - name: host-time
              mountPath: /etc/localtime
          ports:
            - containerPort: 8083
          resources:
            requests:
              cpu: 1
              memory: 1024Mi
            limits:
              cpu: 1
              memory: 1024Mi
      imagePullSecrets:
        - name: paas
      volumes:
        - name: host-time
          hostPath:
            path: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
  name: client-demo-server
  namespace: paas
  labels:
    run: client-demo-server
spec:
  type: NodePort
  ports:
    - port: 8083
  selector:
    run: client-demo-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: client-demo-server
  namespace: paas
spec:
  rules:
    - host: client-demo-server.jbzm.internal.com
      http:
        paths:
          - backend:
              serviceName: client-demo-server
              servicePort: 8083
To repeat: the only reason the service waits 20 seconds before starting is to lengthen its boot time.
kubectl get pod -n paas
NAME                                  READY   STATUS    RESTARTS   AGE
client-demo-server-57c468986d-2r9lh   1/1     Running   0          14m
client-demo-server-57c468986d-k8xxw   1/1     Running   0          14m
client-demo-server-57c468986d-wmcxf   1/1     Running   0          14m
client-demo-server-767868f74c-gx6df   0/1     Running   0          49s
Here replicas is set to 3 first, and then the Deployment is updated to a new version.
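The version bump itself can be done with kubectl set image; any change to the pod template triggers a rollout. The 0.0.3 tag below is hypothetical, standing in for whatever new image you push:

```
# Edit the pod template's image; this starts a rolling update.
kubectl -n paas set image deployment/client-demo-server \
  client-demo-server-containers=registry.cn-beijing.aliyuncs.com/spring-cloud-client-demo-image:0.0.3

# Block until every replica has been replaced and is Ready.
kubectl -n paas rollout status deployment/client-demo-server
```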
You can see that no service has been taken down yet, but a new pod, client-demo-server-767868f74c-gx6df, has appeared; it is Running but not yet Ready.
The mechanics of Deployment rolling updates and scheduling are explained in detail in the official documentation.
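One detail worth spelling out: the manifest above sets no update strategy, so the Deployment falls back to the RollingUpdate defaults, which is exactly why a single surge pod appears while all three old replicas keep serving. Written out explicitly (these values are the Kubernetes defaults, not something from the original manifest):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # 25% of 3 replicas rounds up to 1 extra pod
      maxUnavailable: 25%  # 25% of 3 rounds down to 0, so an old pod is
                           # only removed once a new pod becomes Ready
```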
Check the logs of the newly created pod:
kubectl logs -f --tail=100 -n paas client-demo-server-767868f74c-gx6df
0 seconds late run
1 seconds late run
2 seconds late run
3 seconds late run
4 seconds late run
5 seconds late run
6 seconds late run
7 seconds late run
You can see the service is still starting up. Once the countdown ends and the application has booted, the kubelet calls the service's /hello endpoint; when the call returns status code 200, the service is considered to have started successfully.
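The readiness decision described above can be modeled in a few lines of Java. This is purely illustrative, not kubelet code: the class and method names are mine, and failure handling is simplified so that a single failed probe marks the container not ready (the real kubelet also applies a failureThreshold):

```java
// Illustrative model of how HTTP probe results roll up into the
// container's Ready condition. Not real kubelet code.
public class ReadinessTracker {
    private final int successThreshold; // consecutive successes required
    private int consecutiveSuccesses = 0;
    private boolean ready = false;

    public ReadinessTracker(int successThreshold) {
        this.successThreshold = successThreshold;
    }

    // Record one probe result. Kubernetes treats any status in
    // [200, 400) as a successful httpGet probe.
    public void record(int httpStatus) {
        if (httpStatus >= 200 && httpStatus < 400) {
            consecutiveSuccesses++;
            if (consecutiveSuccesses >= successThreshold) {
                ready = true;
            }
        } else {
            // Simplification: one failure immediately flips to not ready.
            consecutiveSuccesses = 0;
            ready = false;
        }
    }

    public boolean isReady() {
        return ready;
    }

    public static void main(String[] args) {
        // successThreshold: 1, matching the readinessProbe in the manifest.
        ReadinessTracker t = new ReadinessTracker(1);
        t.record(503); // app still in its 20-second sleep
        System.out.println("after 503: ready=" + t.isReady());
        t.record(200); // /hello finally answers
        System.out.println("after 200: ready=" + t.isReady());
    }
}
```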
kubectl get pod -n paas
NAME                                  READY   STATUS              RESTARTS   AGE
client-demo-server-57c468986d-2r9lh   1/1     Running             0          15m
client-demo-server-57c468986d-k8xxw   0/1     Terminating         0          15m
client-demo-server-57c468986d-wmcxf   1/1     Running             0          15m
client-demo-server-767868f74c-gx6df   1/1     Running             0          74s
client-demo-server-767868f74c-lzwxw   0/1     ContainerCreating   0          17s
At this point you can see that the old pods have started to be taken down.
And that is essentially the end of the demonstration.