Performing automatic rolling updates with the deprecated ReplicationController

1. Build the two image versions

[root@xxx ~]# cat Dockerfile
FROM nginx
RUN cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime  # set the timezone to Asia/Shanghai
# Generate the page at container startup so each Pod reports its own hostname; build with v1 / v2 to tell the versions apart
ENTRYPOINT echo version:v1,hostname:$HOSTNAME > /usr/share/nginx/html/index.html && exec nginx -g 'daemon off;'
[root@xxx ~]# docker build -t version:v1 .
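
The v2 image can be produced the same way; a minimal sketch, assuming the only change is swapping v1 for v2 in the Dockerfile above:

[root@xxx ~]# sed -i 's/version:v1/version:v2/' Dockerfile
[root@xxx ~]# docker build -t version:v2 .

Because imagePullPolicy is IfNotPresent and the image is not pushed to a registry, both version:v1 and version:v2 must already exist on the target node (build them there, or move them over with docker save / docker load).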

2. Deploy the v1 version of the app

---
apiVersion: v1
kind: ReplicationController
metadata:
  name: version-v1
spec:
  replicas: 3
  template:
    metadata:
      name: version
      labels:
        app: version
    spec:
      nodeSelector: 
        kubernetes.io/hostname: "k8s-node1.novalocal" 
      containers:
      - image: version:v1
        name: version
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: version
spec:
  type: NodePort
  selector:
    app: version
  ports:
  - name: http
    port: 80
    targetPort: 80
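
Assuming the two manifests above are saved together as version-v1.yaml (the filename is illustrative), they can be created and verified like this:

[root@xxx ~]# kubectl create -f version-v1.yaml
[root@xxx ~]# kubectl get rc version-v1                # DESIRED/CURRENT/READY should all reach 3
[root@xxx ~]# kubectl get pods -l app=version -o wide  # all Pods land on k8s-node1 per the nodeSelector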

3. Open another terminal and watch continuously

[root@xxx ~]# kubectl get svc  # look up the NodePort assigned to the Service
[root@xxx ~]# while true; do curl <any-node-IP>:<NodePort>; sleep 0.5; done
version:v1,hostname:version-v1-2w9pz
version:v1,hostname:version-v1-xs4z1
...
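
If you would rather not read the port off the table, the NodePort can also be captured directly (a sketch; the Service name version comes from the manifest above):

[root@xxx ~]# NODE_PORT=$(kubectl get svc version -o jsonpath='{.spec.ports[0].nodePort}')
[root@xxx ~]# while true; do curl <any-node-IP>:$NODE_PORT; sleep 0.5; done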

4. Update the version
Use kubectl to perform the rolling update

[root@xxx ~]# kubectl rolling-update version-v1 version-v2 --image=version:v2
Command "rolling-update" is deprecated, use "rollout" instead
Created version-v2
Scaling up version-v2 from 0 to 3, scaling down version-v1 from 3 to 0 (keep 3 pods available, don't exceed 4 pods)
Scaling version-v2 up to 1
Scaling version-v1 down to 2
Scaling version-v2 up to 2
Scaling version-v1 down to 1
Scaling version-v2 up to 3
Scaling version-v1 down to 0
Update succeeded. Deleting version-v1
replicationcontroller/version-v2 rolling updated to "version-v2"

The upgrade works by creating a new RC and then scaling the new RC up while scaling the old RC down in equal steps; add --v=6 to see a detailed log of what the command does.
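
To make that mechanism concrete, here is the same dance sketched by hand (illustrative only: it assumes a hypothetical version-v2.yaml whose RC uses its own distinct selector label, since two RCs must never select each other's Pods; rolling-update takes care of that relabeling automatically):

[root@xxx ~]# kubectl create -f version-v2.yaml         # new RC, started with replicas: 0
[root@xxx ~]# kubectl scale rc version-v2 --replicas=1
[root@xxx ~]# kubectl scale rc version-v1 --replicas=2
[root@xxx ~]# kubectl scale rc version-v2 --replicas=2
[root@xxx ~]# kubectl scale rc version-v1 --replicas=1
[root@xxx ~]# kubectl scale rc version-v2 --replicas=3
[root@xxx ~]# kubectl scale rc version-v1 --replicas=0
[root@xxx ~]# kubectl delete rc version-v1
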
5. Back in the terminal from step 3, the responses roll over to the new version with no interruption in service, until every response comes from v2.
6. Why is this approach deprecated?

1. The scale-up/scale-down requests during the upgrade are issued by the kubectl client, not orchestrated by the master.
2. The upgrade depends on the kubectl session staying healthy; if the terminal exits or the network drops mid-way, the Pods and RCs are left in an intermediate state.
3. Kubernetes is built around continuously converging on a declared desired state: upgrading should only require declaring the new image, which is exactly what the Deployment resource was introduced for (see the sketch below).
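
For comparison, a minimal Deployment sketch of the same app (illustrative only, assuming a cluster new enough for apps/v1; none of this appears in the original steps):

[root@xxx ~]# kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: version
spec:
  replicas: 3
  selector:
    matchLabels:
      app: version
  template:
    metadata:
      labels:
        app: version
    spec:
      containers:
      - name: version
        image: version:v1
        imagePullPolicy: IfNotPresent
EOF
[root@xxx ~]# kubectl set image deployment/version version=version:v2  # just declare the new image
[root@xxx ~]# kubectl rollout status deployment/version                # the control plane converges on it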
