Cluster versions:
NAME        STATUS   ROLES    AGE     VERSION
server101   Ready    <none>   6d18h   v1.15.1
server88    Ready    master   6d19h   v1.15.1
A Pod in Kubernetes is not long-lived: it is destroyed and recreated for all sorts of reasons, and when an old Pod is replaced by a new one, even its IP address can change. Kubernetes therefore introduces the Service. A Service is an abstraction: when it is created, Kubernetes assigns it a virtual IP, and instead of talking to a Pod's IP and port directly, clients talk to the Service's virtual IP and port, and the Service forwards the request to the Pods behind it.
Each application gets one Service to load-balance across its instances, so callers only ever need to find that application's Service. The Service's IP is resolved by the cluster's internal DNS, which means that inside the cluster a caller can simply use the service name. Here each Service is named after the microservice it fronts.
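As a minimal illustration (assuming the amp namespace and the amp-eureka Service defined later in this section, and that a busybox image can be pulled), a throwaway Pod can resolve a Service purely by name through the cluster DNS:

```sh
# A throwaway Pod resolves the Service by its short name and by its FQDN;
# CoreDNS answers with the Service's virtual cluster IP, not a Pod IP.
kubectl -n amp run dns-test --image=busybox:1.28 --restart=Never --rm -it -- \
    sh -c 'nslookup amp-eureka; nslookup amp-eureka.amp.svc.cluster.local'
```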
All resources for this project live in a single namespace, amp:
apiVersion: v1
kind: Namespace
metadata:
  name: amp
Looking at how the service modules depend on each other: every module needs to reach the amp-eureka service when it starts and then pull its configuration from the amp-config service, so these two services have to be created in K8s first.
All services here are created as Deployments rather than Replication Controllers; the advantage of a Deployment is that later version upgrades of a module can be done as quick, convenient rolling updates.
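As a sketch of what such an upgrade looks like in practice (the image tag v17 is hypothetical):

```sh
# Point the Deployment at a new image tag; Kubernetes replaces the Pods gradually.
kubectl -n amp set image deployment/amp-eureka amp-eureka=192.168.0.88:5000/amp/amp-eureka:v17 --record
# Watch the rollout and, if the new version misbehaves, roll back.
kubectl -n amp rollout status deployment/amp-eureka
kubectl -n amp rollout undo deployment/amp-eureka
```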
vim amp-eureka-deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: amp
  name: amp-eureka
  labels:
    app: eureka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amp-eureka
  template:
    metadata:
      labels:
        app: amp-eureka
    spec:
      volumes:
      - name: logs
        nfs:
          path: /home/docker/k8s_yaml/nfs_logs/amp-eureka
          server: 192.168.0.88
      nodeName: server88
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: amp-eureka
        image: 192.168.0.88:5000/amp/amp-eureka:v16
        ports:
        - containerPort: 20000
        volumeMounts:
        - name: logs
          mountPath: /opt/app/logs
A Service resource plays two roles in K8s:
1. It gives the modules a stable, label-selected name through which they can reach one another.
2. It load-balances requests across the Pods behind it.
For access from outside the cluster, the type is NodePort. (Note that the default NodePort range is 30000-32767, so the nodePorts 20000 and 8090 used in this section assume the kube-apiserver's --service-node-port-range has been widened.)
[docker@server88 all]$ cat amp-eureka-svc.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: amp
  name: eureka
spec:
  type: NodePort
  ports:
  - port: 20000
    nodePort: 20000
  selector:
    app: amp-eureka
For Pod-to-Pod access inside the cluster, the type is ClusterIP (the default):
[docker@server88 all]$ cat amp-eureka-svc2.yaml

apiVersion: v1
kind: Service
metadata:
  name: amp-eureka
  namespace: amp
spec:
  ports:
  - port: 20000
    targetPort: 20000
  selector:
    app: amp-eureka
Once the yaml files for these resources are written, run the following commands to create the resources in k8s:
kubectl create -f amp-eureka-deploy.yaml
kubectl create -f amp-eureka-svc.yaml
kubectl create -f amp-eureka-svc2.yaml
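A quick sanity check that the registry came up (output will vary with the cluster state):

```sh
# The Deployment, its Pod and both Services should all appear in the amp namespace.
kubectl -n amp get deploy,pod,svc -o wide
```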
One important point here: amp-config talks to amp-eureka through the Service named amp-eureka.
vim amp-config-deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: amp
  name: amp-config
  labels:
    app: amp-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amp-config
  template:
    metadata:
      labels:
        app: amp-config
    spec:
      nodeName: server88
      volumes:
      - name: logs
        nfs:
          path: /home/docker/k8s_yaml/nfs_logs/amp-config
          server: 192.168.0.88
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: amp-config
        image: 192.168.0.88:5000/amp/amp-config:v8
        ports:
        - containerPort: 20140
        volumeMounts:
        - name: logs
          mountPath: /opt/app/logs
        env:
        - name: localdocker_eureka_com
          value: amp-eureka
        - name: localdocker_git_com
          value: 192.168.0.89
        - name: localdocker_rabbitmq_com
          value: 192.168.0.88
This Service is needed because, in Spring Cloud, every module pulls its configuration from the amp-config service after it starts:
vim amp-config-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: config-server
  namespace: amp
spec:
  ports:
  - port: 20140
    targetPort: 20140
  selector:
    app: amp-config
Run:
kubectl create -f amp-config-deploy.yaml
kubectl create -f amp-config-svc.yaml
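To confirm that the config server is reachable through its Service name, one option (a sketch; the application name amp-user-server and profile default are assumptions about how the config repository is laid out) is to hit Spring Cloud Config Server's /{application}/{profile} endpoint from a throwaway Pod:

```sh
# wget against the config-server Service; a JSON document with the resolved
# property sources means the server is up and the Service name resolves.
kubectl -n amp run cfg-test --image=busybox:1.28 --restart=Never --rm -it -- \
    wget -qO- http://config-server:20140/amp-user-server/default
```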
The amp-oss-gateway service has to be reachable from the nginx instance at the front of the Spring Cloud architecture, so a Service resource is created for it as well:
vim amp-oss-gateway-deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: amp
  name: amp-oss-gateway
  labels:
    app: amp-oss-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amp-oss-gateway
  template:
    metadata:
      labels:
        app: amp-oss-gateway
    spec:
      nodeName: server88
      # hostAliases:
      # - ip: 192.168.0.101
      #   hostnames:
      #   - "server101"
      imagePullSecrets:
      - name: mysecret
      volumes:
      - name: logs
        nfs:
          path: /home/docker/k8s_yaml/nfs_logs/amp-oss-gateway
          server: 192.168.0.88
      containers:
      - name: amp-oss-gateway
        image: 192.168.0.88:5000/amp/amp-oss-gateway:v7
        ports:
        - containerPort: 20010
        volumeMounts:
        - name: logs
          mountPath: /opt/app/logs
        env:
        - name: localdocker_eureka_com
          value: amp-eureka
        # - name: localdocker_git_com
        #   value: 192.168.0.89

apiVersion: v1
kind: Service
metadata:
  name: oss
  namespace: amp
spec:
  ports:
  - port: 20010
    targetPort: 20010
  selector:
    app: amp-oss-gateway
Run:
kubectl create -f amp-oss-gateway-deploy.yaml
kubectl create -f amp-oss-gateway-svc.yaml
Likewise, amp-api-gateway needs to be called by nginx in the same way as amp-oss-gateway:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: amp
  name: amp-api-gateway
  labels:
    app: amp-api-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amp-api-gateway
  template:
    metadata:
      labels:
        app: amp-api-gateway
    spec:
      # hostAliases:
      # - ip: 192.168.0.101
      #   hostnames:
      #   - "server101"
      nodeName: server88
      volumes:
      - name: logs
        nfs:
          path: /home/docker/k8s_yaml/nfs_logs/amp-api-gateway
          server: 192.168.0.88
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: amp-api-gateway
        image: 192.168.0.88:5000/amp/amp-api-gateway:v8
        ports:
        - containerPort: 20020
        # resources:
        #   limits:
        #     cpu: "300m"
        #     memory: "512Mi"
        volumeMounts:
        - name: logs
          mountPath: /opt/app/logs
        env:
        - name: localdocker_eureka_com
          value: amp-eureka
        - name: localdocker_git_com
          value: 192.168.0.89
        - name: localdocker_redis_com
          value: 192.168.0.88
        - name: localdocker_db_com
          value: 192.168.0.88
        - name: localdocker_reids_com
          value: 192.168.0.88
        - name: localdocker_zk_com
          value: 192.168.0.88
        - name: localdocker_rabbitmq_com
          value: 192.168.0.88

apiVersion: v1
kind: Service
metadata:
  name: amp-api-gateway
  namespace: amp
spec:
  ports:
  - port: 20020
    targetPort: 20020
  selector:
    app: amp-api-gateway
Run:
kubectl create -f amp-api-gateway-deploy.yaml
kubectl create -f amp-api-gateway-svc.yaml
All of the remaining Spring Cloud service modules follow the same pattern as amp-user-server, and their yaml files have all been written and organised accordingly.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: amp
  name: amp-user-server
  labels:
    app: amp-user-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: amp-user-server
  template:
    metadata:
      labels:
        app: amp-user-server
    spec:
      nodeName: server88
      volumes:
      - name: logs
        nfs:
          path: /home/docker/k8s_yaml/nfs_logs/amp-user-server
          server: 192.168.0.88
      # hostAliases:
      # - ip: 192.168.0.101
      #   hostnames:
      #   - "server101"
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: amp-user-server
        image: 192.168.0.88:5000/amp/amp-user-server:v29
        ports:
        - containerPort: 20120
        # resources:
        #   limits:
        #     cpu: "400m"
        #     memory: "512Mi"
        volumeMounts:
        - name: logs
          mountPath: /opt/app/logs
        env:
        - name: localdocker_eureka_com
          value: amp-eureka
        - name: localdocker_git_com
          value: 192.168.0.89
        - name: localdocker_redis_com
          value: 192.168.0.88
        - name: localdocker_db_com
          value: 192.168.0.88
        - name: localdocker_reids_com
          value: 192.168.0.88
        - name: localdocker_zk_com
          value: 192.168.0.88
        - name: localdocker_rabbitmq_com
          value: 192.168.0.88
vim amp-user-server-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: amp-user-server
  namespace: amp
spec:
  ports:
  - port: 20120
    targetPort: 20120
  selector:
    app: amp-user-server
Run the commands to create the resources:
kubectl create -f amp-user-server-deploy.yaml
kubectl create -f amp-user-server-svc.yaml
All of the resource yaml files are now written and stored on server88 under /home/docker/k8s_yaml/amp/all; every remaining service module can be started in the same way as amp-user-server.
Once all the service modules are running as containers, the final step is to deploy Nginx. When moving the Nginx service of the original Spring Cloud architecture into K8s, three things need attention:
1. Getting nginx's configuration file into the container.
2. Mounting the site (web root) directory into the container.
3. The connections from the nginx configuration to the Spring Cloud services.
[docker@server88 nginx]$ vim nginx-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxconfig
  namespace: amp
data:
  nginx.conf: |
    user nginx;
    worker_processes 1;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    events {
        worker_connections 1024;
    }
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;
        sendfile on;
        keepalive_timeout 65;
        server {
            listen 80;
            server_name manager.hexinjingu.com;
            root /usr/share/nginx/html;
            index index.html;
            location /userService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /menuService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /ssoService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /logService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /categoryService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /apiService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /agencyworkService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /fileService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /messageService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /formService {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-oss-gateway:20010;
            }
            location /api/ {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://amp-api-gateway:20020;
            }
        }
    }
[docker@server88 nginx]$ vim nginx-deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: amp
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeName: server88
      imagePullSecrets:
      - name: mysecret
      containers:
      - name: eureka
        image: 192.168.0.88:5000/amp/nginx:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: amp_oss_gateway
          value: oss
        - name: amp_api_gateway
          value: api
        volumeMounts:
        - name: www-volume
          mountPath: /usr/share/nginx/html
        - name: config-volume
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: logs
          mountPath: /var/log/nginx
      volumes:
      - name: logs
        nfs:
          path: /home/docker/k8s_yaml/nfs_logs/nginx
          server: 192.168.0.88
      - name: www-volume
        hostPath:
          path: /home/docker/www
      - name: config-volume
        configMap:
          name: nginxconfig
          items:
          - key: nginx.conf
            path: nginx.conf
        # hostPath:
        #   path: /home/docker/config/nginx.conf
Nginx is exposed externally on port 8090 (again a nodePort outside the default 30000-32767 range, so the range is assumed to have been widened):
[docker@server88 nginx]$ vim nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  namespace: amp
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 8090
  selector:
    app: nginx
Start nginx:
kubectl create -f nginx-configmap.yaml
kubectl create -f nginx-svc.yaml
kubectl create -f nginx-deploy.yaml
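A quick external check (assuming the widened NodePort range mentioned above; any node IP works for a NodePort Service):

```sh
# 8090 is the nodePort defined in nginx-svc.yaml; an HTTP response means nginx is up.
curl -I http://192.168.0.88:8090/
```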
Because the service modules write their logs inside their containers, looking at, say, the amp-user-server log means running a couple of commands to get a shell inside the container first. Logs kept inside a container are also not durable: when the container dies and is recreated, the log files disappear with it.
kubectl get all -o wide -n amp
kubectl exec -it amp-user-server-5b5c89f96f-cwgjc sh -n amp
tail -f ./logs/o2o.log
Instead, NFS can be used to collect the logs of every container on the NFS server.
NFS is short for Network File System; its main purpose is to let different machines and different operating systems share files with one another over the network.
On server88 (the NFS server), install two packages, nfs-utils and rpcbind:
yum install -y nfs-utils rpcbind
Installed:
nfs-utils.x86_64 1:1.3.0-0.54.el7 rpcbind.x86_64 0:0.2.0-44.el7
On server101, install one package, nfs-utils:
[root@zyshanlinux-02 ~]# yum install -y nfs-utils
Installed:
nfs-utils.x86_64 1:1.3.0-0.54.el7
Write the exports file on server88:
vim /etc/exports
# Share /home/docker/k8s_yaml/nfs_logs and allow hosts in 192.168.0.0/24 to mount it
/home/docker/k8s_yaml/nfs_logs 192.168.0.0/24(rw,sync,no_root_squash,no_all_squash)
rw: read-write access
ro: read-only access
no_root_squash: if the user mounting from the NFS client is root, they keep root privileges on the share
root_squash: if the user mounting from the client is root, their privileges are squashed to the anonymous user nobody
all_squash: regardless of the client user's privileges, they are remapped to the anonymous user nobody
anonuid: map client users to the given user id, which must exist in /etc/passwd
sync: data is written through to storage synchronously
async: data is buffered in memory first instead of being written straight to disk
Restart the service so the exports take effect:
systemctl restart nfs-utils.service
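Alternatively, the export table can be re-read and inspected without a full service restart:

```sh
# Re-export everything in /etc/exports and show the active export list.
exportfs -arv
exportfs -v
```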
On server101, run:
[docker@server101 ~]$ showmount -e 192.168.0.88
Export list for 192.168.0.88:
/home/docker/k8s_yaml/nfs_logs 192.168.0.0/24
Remote mount test:
sudo mount -t nfs 192.168.0.88:/home/docker/k8s_yaml/nfs_logs /mnt
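If the mount succeeds, the share shows up under /mnt and the test mount can be detached again:

```sh
# Verify the NFS share is mounted, then clean up the test mount.
df -h /mnt
sudo umount /mnt
```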
The log directory of each Spring Cloud module, mounted in its yaml file, then ends up in its own directory on the share:
[docker@server88 ~]$ tree k8s_yaml/nfs_logs/
k8s_yaml/nfs_logs/
├── amp-agencywork-server
│   ├── err.log
│   ├── err.log.2020-02-23.04995072492755950.tmp
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-api-gateway
│   ├── err.log
│   ├── err.log.2020-02-23.04756695887876491.tmp
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-category-server
│   ├── err.log
│   ├── err.log.2020-02-23.04995071320332154.tmp
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-config
│   ├── config_err.log
│   ├── config.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-email-server
│   ├── err.log
│   ├── err.log.2020-02-23.04756675806991003.tmp
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-email-task
│   ├── err.log
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-eureka
│   ├── err.log
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-file-server
│   ├── err.log
│   ├── err.log.2020-02-23.04756710480423581.tmp
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-form-server
│   ├── err.log
│   └── o2o.log
├── amp-log-server
│   ├── err.log
│   ├── err.log.2020-02-23.04995073076784769.tmp
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-log-task
│   ├── err.log
│   ├── err.log.2020-02-23.04756711636710138.tmp
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-menu-server
│   ├── err.log
│   ├── err.log.2020-02-23.04756709173102481.tmp
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-message-server
│   ├── err.log
│   ├── err.log.2020-02-23.04995072660893114.tmp
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-oss-gateway
│   ├── err.log
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-sso-server
│   ├── err.log
│   ├── err.log.2020-02-23.04756708962097272.tmp
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-sync-data-server
│   ├── err.log
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
├── amp-user-server
│   ├── err.log
│   ├── err.log.2020-02-23.0.gz
│   ├── o2o.log
│   └── o2o.log.2020-02-23.0.gz
└── nginx
├── access.log
└── error.log
With amp-eureka and amp-config running, starting any other service, for example amp-user-server, failed to register with amp-eureka. The logs showed that amp-user-server could not pull its configuration from amp-config: it was addressing amp-config by hostname, and that hostname is the random name the container is given when it starts, so the connection could never be made.
The fix is to have the modules reach the config service by URI (for Spring Cloud Config clients this is typically the spring.cloud.config.uri property, e.g. http://config-server:20140); Kubernetes DNS then maps config-server onto the corresponding Kubernetes Service.
The name config-server used in the URI must match that Service's metadata.name.
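A quick way to confirm the names line up:

```sh
# The NAME column must match the host used in the config URI.
kubectl -n amp get svc config-server
```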
When Pods were scheduled onto the server87 machine it ran short of memory and its load climbed, so server87 was temporarily removed from the cluster, leaving server88 and server101.
Of the two, the master node server88 has the best hardware, yet Pods were never scheduled onto it. The reason is that, as a safety measure, Kubernetes by default does not schedule Pods onto the master node.
To let the k8s master also act as a worker node, remove its taint:
kubectl taint node server88 node-role.kubernetes.io/master-
A related problem showed up on calls made through the nginx front end: when, for example, amp-user-server needs to call amp-menu-server, it uses the hostname that amp-menu-server registered with the registry, and that container hostname cannot be resolved to the container's IP. Since every module discovers the others through these registered hostnames, all inter-module calls fail in the same way.
The fix is to change the hostname each module registers with the registry so that it matches the module's own Service name (in Spring Cloud this is typically controlled by the eureka.instance.hostname setting).
With a matching Service (svc) resource created for each module, the names the modules exchange can then be resolved inside the cluster by CoreDNS and the calls go through; a quick check is sketched below.
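To see the difference (a sketch; amp-menu-server stands in for any module, the Pod hostname is hypothetical, and a busybox image is assumed to be available):

```sh
# The per-module Service name resolves to a cluster IP through CoreDNS ...
kubectl -n amp run svc-dns --image=busybox:1.28 --restart=Never --rm -it -- \
    nslookup amp-menu-server
# ... while a raw container hostname has no DNS record, so the lookup fails.
kubectl -n amp run pod-dns --image=busybox:1.28 --restart=Never --rm -it -- \
    nslookup amp-menu-server-7c9d6b7f9b-abcde   # hypothetical Pod hostname
```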
Viewing resources
# list Service resources
kubectl get svc
# add -o wide for more detail
kubectl get svc -o wide
# restrict to a given namespace
kubectl get svc -o wide -n amp
# list Pods
kubectl get pod -o wide -n amp
# list Deployments
kubectl get deployment -o wide -n amp
# list nodes
kubectl get node
Viewing logs
# container logs with docker
docker logs -f b7f11b5b3a56
# Pod logs with k8s (remember the namespace)
kubectl logs -f <pod-name> -n kube-system
Viewing details of a failing Pod
kubectl describe pod amp-agencywork-server-77494986f6-t4xd4 -n amp
Getting a shell inside a container
# docker
docker exec -it b7f11b5b3a56 /bin/sh
# k8s
kubectl exec -it amp-agencywork-server-77494986f6-t4xd4 /bin/sh -n amp
Creating or deleting resources
# create
kubectl create -f /home/docker/k8s_yaml/amp/all/amp-eureka-deploy.yaml
# delete
kubectl delete -f /home/docker/k8s_yaml/amp/all/amp-eureka-deploy.yaml
Editing a resource in place
# kubectl edit <resource type> <name> -n <namespace>
kubectl edit deployment <name> -n amp
Scaling replicas up or down
kubectl scale deploy/amp-api-gateway --replicas=10 -n amp
# or edit in place
kubectl edit deployment <name> -n amp
Recording an image upgrade
kubectl set image deployment/nginx nginx=nginx:1.16 --record
Viewing rollout history
kubectl rollout history deploy/nginx
Rolling back
kubectl rollout undo deploy/nginx