Much of this content is adapted from https://k8smeetup.github.io
Some parts have not been cleaned up yet.
Based on Kubernetes 1.11.
Every object in the table below can be configured as an API type in a YAML file:
Category | Names |
---|---|
Resource objects | Pod, ReplicaSet, ReplicationController, Deployment, StatefulSet, DaemonSet, Job, CronJob, HorizontalPodAutoscaler, Node, Namespace, Service, Ingress, Label, CustomResourceDefinition |
Storage objects | Volume, PersistentVolume, Secret, ConfigMap |
Policy objects | SecurityContext, ResourceQuota, LimitRange |
Identity objects | ServiceAccount, Role, ClusterRole |
A Deployment represents one update operation a user performs on the Kubernetes cluster. Deployment is a broader API object than the RS pattern: it can create a new service, update an existing service, or roll out a rolling upgrade of a service. A rolling upgrade is really a composite operation that creates a new RS and then gradually scales the new RS up to the desired replica count while scaling the old RS down to 0; such a composite operation is awkward to describe with a single RS, so the more general Deployment is used instead. Given the direction Kubernetes is heading, all long-running serving workloads will eventually be managed through Deployments.
RC, RS, and Deployment only guarantee the number of Pods backing a microservice; they do not solve how those Pods are accessed. A Pod is just one instance of a running service: it may be stopped on one node at any time and restarted on another node with a new IP, so it cannot provide a service at a fixed IP and port. Stable service delivery requires service discovery and load balancing. Service discovery finds the backend instances for the service a client wants to reach; in a Kubernetes cluster, the object the client accesses is the Service. Each Service gets a virtual IP that is valid inside the cluster, and the service is reached through that virtual IP. Load balancing for microservices inside the cluster is implemented by kube-proxy, the cluster-internal load balancer: a distributed proxy that runs on every node. This design scales well; the more nodes need to reach services, the more kube-proxy instances provide load balancing, and the more highly available the setup becomes. Compare this with the usual approach of putting a reverse proxy in front of servers, where you still have to solve load balancing and high availability for the reverse proxy itself.
Typical background supporting services include storage, logging, and monitoring: services that run on every node to support the Kubernetes cluster itself.
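Such per-node agents are typically run as a DaemonSet. A minimal sketch, assuming an illustrative fluentd log-collector image and names that are not from these notes:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-logging        # illustrative name
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd   # illustrative log-collection image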
Became beta in 1.5 and officially GA in 1.9.
(omitted)
minion
# Prevent pods from being scheduled onto a node
kubectl cordon NODENAME
# Evict all pods from a node
kubectl drain NODENAME
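After maintenance, scheduling can be re-enabled:
# Allow pods to be scheduled onto the node again
kubectl uncordon NODENAME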
To be studied in more detail later.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
[root@server apis]# kubectl create -f nginx.yaml
deployment.apps/nginx-deployment created
[root@server apis]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-deployment-884c7fc54-mh2b4 1/1 Running 0 23s 172.17.1.10 client02 <none>
nginx-deployment-884c7fc54-qz6cg 1/1 Running 0 22s 172.17.1.9 client02 <none>
nginx-deployment-884c7fc54-tztmt 1/1 Running 0 22s 172.17.2.9 client01 <none>
for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1
curl -X POST http://$MANAGEMENT_SERVICE_HOST:$MANAGEMENT_SERVICE_PORT/register -d 'instance=$()&ip=$()'
myapp.yaml
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-myservice
image: busybox
command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
- name: init-mydb
image: busybox
command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
services.yaml
kind: Service
apiVersion: v1
metadata:
name: myservice
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9376
---
kind: Service
apiVersion: v1
metadata:
name: mydb
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9377
kubectl create -f myapp.yaml
kubectl get -f myapp.yaml
kubectl describe -f myapp.yaml
kubectl create -f services.yaml
kubectl get -f myapp.yaml
Check the related results.
#### Test example:
# Start a pause container with docker
docker run -d --name pause -p 8880:80 jimmysong/pause-amd64:3.0
# Create an nginx container that shares the pause container's network, IPC, and PID namespaces
$ cat <<EOF >> nginx.conf
error_log stderr;
events { worker_connections 1024; }
http {
access_log /dev/stdout combined;
server {
listen 80 default_server;
server_name example.com www.example.com;
location / {
proxy_pass http://127.0.0.1:2368;
}
}
}
EOF
docker run -d --name nginx -v `pwd`/nginx.conf:/etc/nginx/nginx.conf --net=container:pause --ipc=container:pause --pid=container:pause nginx
# Create a ghost container
docker run -d --name ghost --net=container:pause --ipc=container:pause --pid=container:pause ghost
Note:
Reference: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
A Pod's status is stored in the phase field of its PodStatus:
PodConditions
A probe is a periodic diagnostic the kubelet performs on a container. To perform a diagnostic, the kubelet calls a Handler implemented by the container. There are three types of handlers:
Each probe returns one of three results:
The kubelet can optionally perform and react to two kinds of probes on running containers:
Question:
Does the readiness probe conflict with the strategy used by dubbo-admin in our SOA framework, or do they complement each other? Be sure to test this before going live.
The livenessProbe object:
Example from the official docs:
apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness
name: liveness-http
spec:
containers:
- args:
- /server
image: k8s.gcr.io/liveness
livenessProbe:
httpGet:
# when "host" is not defined, "PodIP" will be used
# host: my-host
# when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
# scheme: HTTPS
path: /healthz
port: 8080
httpHeaders:
- name: X-Custom-Header
value: Awesome
initialDelaySeconds: 15
timeoutSeconds: 1
name: liveness
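A readinessProbe is declared the same way inside the container spec; a minimal sketch with an illustrative path and port:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10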
Note the status reporting for multi-container Pods.
apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello > /usr/share/message"]
preStop:
exec:
command: ["/usr/sbin/nginx","-s","quit"]
apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
preStop:
exec:
command: ["/bin/sleep","20"]
Note:
[root@server ~]# kubectl create -f lifecycle-events.yaml
[root@server ~]# kubectl get pod/lifecycle-demo
NAME READY STATUS RESTARTS AGE
lifecycle-demo 1/1 Running 0 15s
[root@server ~]# kubectl describe pod/lifecycle-demo
......
Node: client02/10.40.2.229
......
Status: Running
IP: 172.17.1.7
......
[root@server apis]# time kubectl delete -f lifecycle-events.yaml
pod "lifecycle-demo" deleted
real 0m21.562s
user 0m0.205s
sys 0m0.046s
# While the pod is sleeping in preStop, curl still works
[root@client02 k8s-v1.11.5]# curl http://172.17.1.7
<!DOCTYPE html>
......
Notes:
(1) Specifying more than one handler type in preStop is rejected:
preStop:
exec:
command: ["/bin/sleep","20"]
httpGet:
port: http
path: /index.html
The Pod "lifecycle-demo" is invalid: spec.containers[0].lifecycle.preStop.httpGet: Forbidden: may not specify more than 1 handler type
(2) Duplicate preStop handlers of the same type overwrite each other:
preStop:
exec:
command: ["/bin/sleep","20"]
command: ["/bin/sleep","5"] #会将sleep 20覆盖掉
(3) If the preStop command is wrong, it is silently ignored.
For example:
preStop:
exec:
command: ["/bin/sleep 20 && /bin/sleep 5"]
or
command: ["/bin/sleep", "20", "&&", "/bin/sleep", "5"]
Use cases:
Configuration reference: https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
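A minimal PodPreset sketch in the spirit of that page; the alpha settings.k8s.io/v1alpha1 API group must be enabled on the API server, and the names and values here are illustrative:
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: allow-database
spec:
  selector:
    matchLabels:
      role: frontend
  env:
  - name: DB_PORT
    value: "6379"
  volumeMounts:
  - mountPath: /cache
    name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}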
Involuntary disruptions: unavoidable hardware or system software failures, for example:
Voluntary disruptions: actions initiated by the application owner or by the cluster administrator:
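Voluntary disruptions can be bounded with a PodDisruptionBudget; a minimal sketch with an illustrative selector and threshold:
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper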
apiVersion: v1
kind: Pod
metadata:
name: nginx-p
spec:
containers:
- name: nginx
image: nginx
- name: busybox
image: busybox
command: ['sh','-c','sleep 36000']
Note:
kubectl exec -it nginx-p -c busybox -- /bin/sh
# Inside the specified container:
/ # netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
/ # telnet localhost 80
Connection closed by foreign host
$ kubectl get pods -l environment=production,tier=frontend
$ kubectl get pods -l 'environment in (production),tier in (frontend)'
In objects such as Service and ReplicationController, the label selector for pods supports only equality-based matching, for example:
selector:
component: redis
Objects such as Job, Deployment, ReplicaSet, and DaemonSet also support set-based selectors, for example:
selector:
matchLabels:
component: redis
matchExpressions:
- {key: tier, operator: In, values: [cache]}
- {key: environment, operator: NotIn, values: [dev]}
For example, a Service uses a label selector to expose a group of pods of the same type as a single service.
The label selector syntax in node affinity and pod affinity differs slightly; an example:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/e2e-az-name
operator: In
values:
- e2e-az1
- e2e-az2
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: another-node-label-key
operator: In
values:
- another-node-label-value
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: istio-manager
spec:
replicas: 1
template:
metadata:
annotations:
alpha.istio.io/sidecar: ignore
labels:
istio: manager
spec:
serviceAccountName: istio-manager-service-account
containers:
- name: discovery
image: harbor-001.jimmysong.io/library/manager:0.1.5
imagePullPolicy: Always
args: ["discovery", "-v", "2"]
ports:
- containerPort: 8080
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: apiserver
image: harbor-001.jimmysong.io/library/manager:0.1.5
imagePullPolicy: Always
args: ["apiserver", "-v", "2"]
ports:
- containerPort: 8081
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
Reference: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
Ways of controlling how pods are scheduled onto nodes:
# Set taints:
kubectl taint nodes node1 key1=value1:NoExecute
kubectl taint nodes node1 key2=value2:NoSchedule
# Remove taints:
kubectl taint nodes node1 key1:NoSchedule-
kubectl taint nodes node1 key1:NoExecute-
# View taints
kubectl describe nodes node1
# Pair with tolerations:
Set the tolerations field in the pod spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoSchedule"
- key: "key1"
operator: "Equal"
value: "value1"
effect: "NoExecute"
tolerationSeconds: 6000
# effect can be NoSchedule, PreferNoSchedule, or NoExecute.
# operator can be Equal or Exists.
# tolerationSeconds is how long the pod may keep running on the node once it is slated for eviction.
Taint-based evictions and conditions:
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 6000
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
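A minimal HorizontalPodAutoscaler manifest equivalent to the kubectl autoscale command used later in these notes; the target name and thresholds are illustrative:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 5
  maxReplicas: 7
  targetCPUUtilizationPercentage: 80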
Reference:
The following two topics will be studied at the end of the concepts section.
Reference:
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
    app: nginx # this is the Deployment's own label
spec:
replicas: 3
selector:
matchLabels:
      app: nginx # this must match the template labels below
template:
metadata:
labels:
      app: nginx # this is the Pod template's label
spec:
containers:
- name: nginx
        image: nginx:1.7.9 # changed to 1.7.9 here; the official example uses 1.15.4
ports:
- containerPort: 80
Notes on the example:
Create the Deployment by running the command below (the image has already been pulled locally):
kubectl create -f nginx-deployment.yaml --record=true
# Note:
# Setting kubectl's --record flag to true records the current command in an annotation on the created or updated resource.
# This is useful later, for example to see which command produced each Deployment revision.
[root@server apis]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 3s
Field descriptions:
UP-TO-DATE: the number of replicas that have been updated to the desired state
[root@server apis]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-deployment-67594d6bf6-2l44w 1/1 Running 0 13s 172.17.2.21 client01 <none>
nginx-deployment-67594d6bf6-8tqbj 1/1 Running 0 13s 172.17.2.22 client01 <none>
nginx-deployment-67594d6bf6-wgddz 1/1 Running 0 13s 172.17.1.21 client02 <none>
nginx-p 2/2 Running 0 5h 172.17.1.19 client02 <none>
[root@server apis]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE LABELS
nginx-deployment-67594d6bf6-2l44w 1/1 Running 0 53s 172.17.2.21 client01 <none> app=nginx,pod-template-hash=2315082692
nginx-deployment-67594d6bf6-8tqbj 1/1 Running 0 53s 172.17.2.22 client01 <none> app=nginx,pod-template-hash=2315082692
nginx-deployment-67594d6bf6-wgddz 1/1 Running 0 53s 172.17.1.21 client02 <none> app=nginx,pod-template-hash=2315082692
nginx-p 2/2 Running 0 5h 172.17.1.19 client02 <none> <none>
[root@server apis]# kubectl rollout status deployment.v1.apps/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@server apis]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-67594d6bf6 3 3 3 8m
Note:
Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE].
The hash value is automatically generated when the Deployment is created.
Note:
A Deployment rollout is triggered if and only if the Deployment's pod template (i.e. .spec.template) changes, for example when its labels or container image are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
[root@server apis]# kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record && kubectl rollout status deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment image updated
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
[root@server apis]# kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 24m
[root@server apis]# kubectl rollout status deployment.v1.apps/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@server apis]# kubectl get pods -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE LABELS
nginx-deployment-6fdbb596db-4zqwq 1/1 Running 0 37s 172.17.2.24 client01 <none> app=nginx,pod-template-hash=2986615286
nginx-deployment-6fdbb596db-l44hp 1/1 Running 0 40s 172.17.2.23 client01 <none> app=nginx,pod-template-hash=2986615286
nginx-deployment-6fdbb596db-qlfhq 1/1 Running 0 38s 172.17.1.22 client02 <none> app=nginx,pod-template-hash=2986615286
nginx-p 2/2 Running 0 5h 172.17.1.19 client02 <none> <none>
[root@server apis]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-67594d6bf6 0 0 0 25m
nginx-deployment-6fdbb596db 3 3 3 55
Rolling update strategy:
RollingUpdateStrategy:
25% max unavailable: at most 25% of the pods may be unavailable at any time
25% max surge: at most 25% extra new pods may be started at any time
kubectl describe deployments
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 40m deployment-controller Scaled up replica set nginx-deployment-67594d6bf6 to 3
Normal ScalingReplicaSet 16m deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 1
Normal ScalingReplicaSet 16m deployment-controller Scaled down replica set nginx-deployment-67594d6bf6 to 2
Normal ScalingReplicaSet 16m deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 2
Normal ScalingReplicaSet 16m deployment-controller Scaled down replica set nginx-deployment-67594d6bf6 to 1
Normal ScalingReplicaSet 16m deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 3
Normal ScalingReplicaSet 16m deployment-controller Scaled down replica set nginx-deployment-67594d6bf6 to 0
Analysis:
Step 1: at creation time, the replica count is scaled straight up to 3.
Step 2: the new RS is scaled up to 1 while the old RS is scaled down to 2.
Step 3: the new RS is scaled up to 2 while the old RS is scaled down to 1.
Step 4: the new RS is scaled up to 3 while the old RS is scaled down to 0.
A second way to upgrade:
kubectl edit deployment.v1.apps/nginx-deployment
.spec.template.spec.containers[0].image from nginx:1.9.1 to nginx:1.15.4
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
kubernetes.io/change-cause: kubectl set image deployment.v1.apps/nginx-deployment
nginx=nginx:1.9.1 --record=true
creationTimestamp: 2018-12-14T06:47:03Z
generation: 2
labels:
app: nginx
name: nginx-deployment
namespace: default
resourceVersion: "837495"
selfLink: /apis/apps/v1/namespaces/default/deployments/nginx-deployment
uid: 108fefd9-ff6c-11e8-92e8-005056b6756e
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 2
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx:1.9.1
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 3
conditions:
- lastTransitionTime: 2018-12-14T06:47:04Z
lastUpdateTime: 2018-12-14T06:47:04Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2018-12-14T06:47:03Z
lastUpdateTime: 2018-12-14T07:11:29Z
message: ReplicaSet "nginx-deployment-6fdbb596db" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 3
replicas: 3
updatedReplicas: 3
Note: when the change is made this way, kubectl describe deployments shows that neither the events nor the template were updated.
Notes (not translating the original docs line by line):
For example, the nginx version 1.9.1 was mistyped as 1.91.
[root@server apis]# kubectl edit deployment.v1.apps/nginx-deployment
Edit cancelled, no changes made.
The rollout gets stuck:
[root@server apis]# kubectl rollout status deployments nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 out of 3 new replicas have been updated...
The new RS starts one pod but gets stuck, while the old RS still has all 3 pods:
[root@server soft]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-58c7645486 1 1 0 1m
nginx-deployment-67594d6bf6 0 0 0 2h
nginx-deployment-6fdbb596db 3 3 3 1h
[root@server soft]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-58c7645486-gvgsm 0/1 ImagePullBackOff 0 2m
nginx-deployment-6fdbb596db-4zqwq 1/1 Running 0 1h
nginx-deployment-6fdbb596db-l44hp 1/1 Running 0 1h
nginx-deployment-6fdbb596db-qlfhq 1/1 Running 0 1h
★ Note:
The Deployment controller automatically stops the bad rollout and stops scaling up the new ReplicaSet.
[root@server soft]# kubectl describe deployment
......
OldReplicaSets: nginx-deployment-6fdbb596db (3/3 replicas created)
NewReplicaSet: nginx-deployment-58c7645486 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 5m deployment-controller Scaled up replica set nginx-deployment-58c7645486 to 1
Why is the 1.91 image not shown in the events?
[root@server soft]# kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create --filename=nginx-deployment.yaml --record=true
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.15.4 --record=true
3 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.15.4 --record=true
But it does show up in the revision details:
[root@server soft]# kubectl rollout history deployment/nginx-deployment --revision=3
deployments "nginx-deployment" with revision #3
Pod Template:
Labels: app=nginx
pod-template-hash=1473201042
Annotations: kubernetes.io/change-cause=kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.15.4 --record=true
Containers:
nginx:
Image: nginx:1.91
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Note:
CHANGE-CAUSE is copied from the Deployment's kubernetes.io/change-cause annotation when the revision is created; it can also be set manually:
kubectl annotate deployment.v1.apps/nginx-deployment kubernetes.io/change-cause="image updated to 1.9.1"
Roll back to the previous revision:
[root@server soft]# kubectl rollout undo deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
[root@server soft]# kubectl rollout undo deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
[root@server soft]# kubectl describe deployment
......
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-6fdbb596db (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 19m deployment-controller Scaled up replica set nginx-deployment-58c7645486 to 1
Normal DeploymentRollback 7s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
Normal ScalingReplicaSet 7s deployment-controller Scaled down replica set nginx-deployment-58c7645486 to 0
[root@server soft]# kubectl get deployment nginx-deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 2h
[root@server soft]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-58c7645486 0 0 0 19m
nginx-deployment-67594d6bf6 0 0 0 2h
nginx-deployment-6fdbb596db 3 3 3 2h
Roll back to a specific revision; it is best to check the revision details first:
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
Note:
[root@server soft]# kubectl scale deployment.v1.apps/nginx-deployment --replicas=5
deployment.apps/nginx-deployment scaled
[root@server soft]# kubectl get deployment nginx-deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 5 5 5 5 2h
If horizontal pod autoscaling is enabled in your cluster, you can attach an autoscaler to the Deployment that chooses between a minimum and maximum number of Pods based on the current Pods' CPU utilization.
[root@server soft]# kubectl autoscale deployment.v1.apps/nginx-deployment --min=5 --max=7 --cpu-percent=80
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
[root@server soft]# kubectl get deployment nginx-deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 5 5 5 5 2h
Proportional scaling
A RollingUpdate Deployment can run multiple versions of an application at the same time.
When you or an autoscaler scales a RollingUpdate Deployment while a rollout is in flight (in progress or paused),
the Deployment controller balances the additional replicas between the existing active ReplicaSets (those with Pods) in order to reduce risk. This is called proportional scaling.
For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.
$ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 10 10 10 10 50s
You update to an image that cannot be resolved from inside the cluster.
$ kubectl set image deploy/nginx-deployment nginx=nginx:sometag
deployment "nginx-deployment" image updated
The image update starts a new rollout with ReplicaSet nginx-deployment-1989198191,
but it is blocked because of the maxUnavailable constraint mentioned above.
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-1989198191 5 5 0 9s
nginx-deployment-618515232 8 8 8 1m
Then a new scaling request for the Deployment comes in: the autoscaler increases the Deployment's replicas to 15.
The Deployment controller has to decide where to add these 5 new replicas.
Without proportional scaling, all 5 would be added to the new ReplicaSet. With proportional scaling, the additional replicas are spread across all ReplicaSets: the larger share goes to the ReplicaSets with the most replicas, the smaller share to those with fewer. ReplicaSets with 0 replicas are not scaled up.
In the example above, 3 replicas are added to the old ReplicaSet and 2 to the new one.
The rollout process eventually moves all replicas to the new ReplicaSet, provided the new replicas become healthy.
$ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 15 18 7 8 7m
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-1989198191 7 7 0 7m
nginx-deployment-618515232 11 11 11 7m
Note:
If horizontal autoscaling is enabled, be careful when upgrading; test this again later while autoscaling is active.
The pause and resume feature seems mainly useful for batching several changes into a single rollout.
# Clear out the earlier nginx-deployment Deployment and recreate it; make sure it has really been deleted.
# Pausing a Deployment only stops its rollouts; it does not stop it from serving traffic.
[root@server apis]# kubectl rollout pause deployment/nginx-deployment
deployment.extensions/nginx-deployment paused
[root@server apis]# kubectl describe deployment/nginx-deployment
......
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing Unknown DeploymentPaused
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-6fdbb596db (5/5 replicas created)
......
# Update the image
[root@server apis]# kubectl set image deploy/nginx-deployment nginx=nginx:1.15.4
deployment.extensions/nginx-deployment image updated
[root@server apis]# kubectl describe deployment/nginx-deployment
......
Replicas: 5 desired | 0 updated | 5 total | 5 available | 0 unavailable
......
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing Unknown DeploymentPaused
OldReplicaSets: nginx-deployment-6fdbb596db (5/5 replicas created)
NewReplicaSet: <none>
......
# Update the resource limits
[root@server apis]# kubectl set resources deployment nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
deployment.extensions/nginx-deployment resource requirements updated
# Resume the deployment
[root@server apis]# kubectl rollout resume deployment nginx-deployment
deployment.extensions/nginx-deployment resumed
[root@server apis]# kubectl describe deployment/nginx-deployment
......
Replicas: 5 desired | 3 updated | 7 total | 4 available | 3 unavailable
......
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
OldReplicaSets: nginx-deployment-6fdbb596db (4/4 replicas created)
NewReplicaSet: nginx-deployment-6687fd74d4 (3/3 replicas created)
......
# Use watch to observe ReplicaSet changes
# kubectl get rs -w
......
kubectl rollout status can also be used to monitor the rollout.
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'
$ kubectl describe deployment nginx-deployment
<...>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True ReplicaSetUpdated
ReplicaFailure True FailedCreate
<...>
If the progress deadline is exceeded, the result is instead:
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing False ProgressDeadlineExceeded
ReplicaFailure True FailedCreate
$ kubectl get deployment nginx-deployment -o yaml
status:
availableReplicas: 2
conditions:
......
- lastTransitionTime: 2016-10-04T12:25:39Z
lastUpdateTime: 2016-10-04T12:25:39Z
message: 'Error creating: pods "nginx-deployment-4262182780-" is forbidden: exceeded quota:
object-counts, requested: pods=1, used: pods=3, limited: pods=2'
reason: FailedCreate
status: "True"
type: ReplicaFailure
observedGeneration: 3
replicas: 2
unavailableReplicas: 2
All operations that apply to a completed Deployment also apply to a failed one. You can scale it up or down, roll back to a previous revision, or even pause it repeatedly while applying changes to the Deployment pod template.
You can set .spec.revisionHistoryLimit on the Deployment to specify how many old ReplicaSets are kept; the rest are garbage-collected in the background. The default is 3 in apps/v1beta1 and 10 in apps/v1.
Note:
Setting this value to 0 clears the Deployment's entire revision history, after which it can no longer be rolled back.
A Deployment also needs apiVersion, kind, metadata, and .spec.
Note:
You should not create other pods whose labels match this selector, whether directly, via another Deployment, or via another controller such as a ReplicaSet or ReplicationController. Otherwise the Deployment will treat those pods as its own. Kubernetes does not stop you from doing this.
If multiple controllers have overlapping selectors, they will fight each other and behave unpredictably.
.spec.strategy specifies the strategy for replacing old Pods with new ones.
spec:
progressDeadlineSeconds: 600
replicas: 5
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
......
Any resource can be modified dynamically with kubectl patch plus a JSON or YAML payload; the effect is the same as the corresponding imperative command, and it works like kubectl edit but driven entirely from the command line.
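For example, a sketch of patching the replica count (equivalent to kubectl scale):
kubectl patch deployment nginx-deployment -p '{"spec":{"replicas":4}}'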
[root@server apis]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-6687fd74d4 5 5 5 4h
nginx-deployment-67594d6bf6 0 0 0 4h
nginx-deployment-6fdbb596db 0 0 0 4h
[root@server apis]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-deployment-6687fd74d4-2prb2 1/1 Running 0 4h 172.17.1.39 client02 <none>
nginx-deployment-6687fd74d4-6v942 1/1 Running 0 4h 172.17.1.38 client02 <none>
nginx-deployment-6687fd74d4-gkrqs 1/1 Running 0 4h 172.17.2.38 client01 <none>
nginx-deployment-6687fd74d4-mq6n8 1/1 Running 0 4h 172.17.1.37 client02 <none>
nginx-deployment-6687fd74d4-p2wg2 1/1 Running 0 4h 172.17.2.37 client01 <none>
nginx-p 2/2 Running 7 3d 172.17.1.19 client02 <none>
[root@server apis]# kubectl describe deployment/nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Mon, 17 Dec 2018 09:37:13 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=3
Selector: app=nginx
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.15.4
Port: 80/TCP
Host Port: 0/TCP
Limits:
cpu: 200m
memory: 512Mi
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-6687fd74d4 (5/5 replicas created)
Events: <none>
[root@server apis]# kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
[root@server apis]# kubectl patch deployment nginx-deployment -p '{"spec": {"rollbackTo": {"revision": 2}}}'
deployment.extensions/nginx-deployment patched
[root@server apis]# kubectl rollout status deployment/nginx-deployment
deployment "nginx-deployment" successfully rolled out
[root@server apis]# kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 <none>
3 <none>
4 <none>
[root@server apis]# kubectl describe deployment/nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Mon, 17 Dec 2018 09:37:13 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=4
Selector: app=nginx
Replicas: 5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-6fdbb596db (5/5 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 29s (x2 over 4h) deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 2
Normal ScalingReplicaSet 29s (x2 over 4h) deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 3
Normal ScalingReplicaSet 29s (x2 over 4h) deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 4
Normal ScalingReplicaSet 29s (x2 over 4h) deployment-controller Scaled up replica set nginx-deployment-6fdbb596db to 5
Normal DeploymentRollback 29s deployment-controller Rolled back deployment "nginx-deployment" to revision 2
Normal ScalingReplicaSet 29s deployment-controller Scaled down replica set nginx-deployment-6687fd74d4 to 4
Normal ScalingReplicaSet 29s deployment-controller Scaled down replica set nginx-deployment-6687fd74d4 to 3
Normal ScalingReplicaSet 29s deployment-controller Scaled down replica set nginx-deployment-6687fd74d4 to 2
Normal ScalingReplicaSet 28s deployment-controller Scaled down replica set nginx-deployment-6687fd74d4 to 1
Normal ScalingReplicaSet 28s deployment-controller Scaled down replica set nginx-deployment-6687fd74d4 to 0
Another approach uses multiple labels to distinguish different versions or configurations of the same component across Deployments. A common practice is to deploy a canary of a new release alongside the previous release (by using a different image tag in the prod template), so that the new version can receive some production traffic before being fully rolled out.
For example, a track label can be used to distinguish the releases.
The primary, stable release would carry a track label with the value stable:
name: frontend
replicas: 3
...
labels:
app: guestbook
tier: frontend
track: stable
...
image: gb-frontend:v3
A new release of the guestbook frontend also carries the track label, but with the value canary, so the two sets of pods do not overlap:
name: frontend-canary
replicas: 1
...
labels:
app: guestbook
tier: frontend
track: canary
...
image: gb-frontend:v4
The frontend Service spans both sets of replicas by selecting only the labels they share, so traffic is forwarded to both versions:
selector:
app: guestbook
tier: frontend
You can adjust the replica counts of the stable and canary releases to control how much traffic each receives. Once you are confident in the canary, you update the stable release to the new version and delete the canary.
Related commands:
# Deploy a canary
kubectl apply -f deployments/ghost-canary.yaml
# Roll out a new version
# Edit deployments/ghost.yaml and update the image:
- name: "ghost"
image: "kelseyhightower/ghost:0.7.8"
# Update the ghost deployment:
kubectl apply -f deployments/ghost.yaml
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
track: stable
template:
metadata:
labels:
app: nginx
track: stable
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
nginx-deployment-canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-canary
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
track: canary
template:
metadata:
labels:
app: nginx
track: canary
spec:
containers:
- name: nginx
image: nginx:1.9.1
ports:
- containerPort: 80
Command history:
kubectl create -f nginx-deployment.yaml
kubectl describe deployment/nginx-deployment
kubectl get deployment/nginx-deployment -o json
kubectl apply -f nginx-deployment-canary.yaml
kubectl describe deployment/nginx-deployment-canary
kubectl get rs
kubectl get deployment
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
kubectl get rs
Of course, production environments generally do not use this approach; traffic routing is usually done with Istio instead.
Further reading on release strategies:
https://www.cnblogs.com/apanly/p/8784096.html
https://www.jianshu.com/p/022685baba7d
http://blog.itpub.net/28624388/viewspace-2158717/
Example applications: https://github.com/kubernetes/contrib/tree/master/statefulsets
https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/
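A sketch of the ttlSecondsAfterFinished field described at that link; this is an alpha feature behind the TTLAfterFinished feature gate (introduced after 1.11), so it may not work on the cluster used in these notes:
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-ttl
spec:
  ttlSecondsAfterFinished: 100   # the finished Job is cleaned up 100s after completion
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never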
Reference:
# Run a deployment; the image below is fairly large, so it is best to pull it in advance
[root@server apis]# kubectl run hello-world --replicas=2 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
deployment.apps/hello-world created
[root@server apis]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hello-world 2 2 2 0 2m
[root@server apis]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hello-world-86cddf59d5-tjjh5 0/1 ContainerCreating 0 1m <none> client02 <none>
hello-world-86cddf59d5-z2q7c 0/1 ContainerCreating 0 1m <none> client01 <none>
# The image is still being pulled
[root@server apis]# kubectl describe pods hello-world-86cddf59d5-z2q7c
......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/hello-world-86cddf59d5-tjjh5 to client02
Normal Pulling 2m kubelet, client02 pulling image "gcr.io/google-samples/node-hello:1.0"
# Create a Service object that exposes the deployment:
[root@server apis]# kubectl expose deployment hello-world --type=NodePort --name=example-service
service/example-service exposed
[root@server apis]# kubectl describe service example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.106.196.12
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31696/TCP
Endpoints: 172.17.1.47:8080,172.17.2.48:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# Access the service
# Via container IP:port
[root@server apis]# curl http://172.17.1.47:8080
Hello Kubernetes!
# Via clusterIP:port
[root@server apis]# curl http://10.106.196.12:8080
Hello Kubernetes!
# Pinging the cluster IP does not work
[root@server apis]# ping 10.106.196.12
PING 10.106.196.12 (10.106.196.12) 56(84) bytes of data.
From 10.10.199.130 icmp_seq=1 Time to live exceeded
From 10.10.199.130 icmp_seq=2 Time to live exceeded
^C
--- 10.106.196.12 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1001ms
# Via any node's IP plus the nodePort
[root@server apis]# curl http://10.40.2.228:31696
Hello Kubernetes!
[root@server apis]# curl http://10.40.2.229:31696
Hello Kubernetes!
[root@server apis]# curl http://10.40.2.230:31696
Hello Kubernetes!
View the generated configuration:
# View the config:
[root@server apis]# kubectl edit service/example-service
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2018-12-18T10:01:15Z
labels:
run: load-balancer-example
name: example-service
namespace: default
resourceVersion: "1337064"
selfLink: /api/v1/namespaces/default/services/example-service
uid: db61beec-02ab-11e9-92e8-005056b6756e
spec:
clusterIP: 10.106.196.12
externalTrafficPolicy: Cluster
ports:
  - nodePort: 31696 # this is the port on the host machine
    port: 8080 # as I understand it, this is the cluster IP's port
protocol: TCP
    targetPort: 8080 # this is the container's port
selector:
run: load-balancer-example
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
Note: in fact specific node IPs can be designated for exposing the service:
spec
externalIPs:
  - 10.40.2.230 # the IP of one of the cluster's host machines
- 10.40.2.228
# Reuse the earlier nginx Deployment YAML; nothing is changed here
[root@server apis]# kubectl create -f nginx-deployment.yaml
[root@server apis]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hello-world-86cddf59d5-tjjh5 1/1 Running 0 16h 172.17.1.47 client02 <none>
hello-world-86cddf59d5-z2q7c 1/1 Running 0 16h 172.17.2.48 client01 <none>
nginx-deployment-764f9dc96-2dtrg 1/1 Running 0 9m 172.17.2.49 client01 <none>
nginx-deployment-764f9dc96-9vqlh 1/1 Running 0 9m 172.17.2.50 client01 <none>
nginx-deployment-764f9dc96-bxsfx 1/1 Running 0 9m 172.17.1.49 client02 <none>
nginx-deployment-764f9dc96-gw9ss 1/1 Running 0 9m 172.17.1.48 client02 <none>
nginx-deployment-764f9dc96-wdsl6 1/1 Running 0 9m 172.17.2.51 client01 <none>
# All running
[root@server apis]# kubectl get rs
NAME DESIRED CURRENT READY AGE
hello-world-86cddf59d5 2 2 2 16h
nginx-deployment-764f9dc96 5 5 5 10m
# Create the Service manifest
[root@server apis]# cat myservice.yaml
kind: Service
apiVersion: v1
metadata:
name: webapp
spec:
type: NodePort
selector:
app: nginx
ports:
- port: 8081
targetPort: 80
protocol: TCP
nodePort: 31697
externalIPs:
- 10.40.2.230
- 10.40.2.228
[root@server apis]# kubectl create -f myservice.yaml
service/webapp created
[root@server apis]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.106.196.12 <none> 8080:31696/TCP 16h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16h
webapp NodePort 10.100.227.245 10.40.2.230,10.40.2.228 8081:31697/TCP 5s
# Verify:
# cluster IP:port
[root@server apis]# curl -I -s http://10.100.227.245:8081 |grep 'HTTP/1.1'|awk '{print $2}'
200
# ★ Host machines: the Service's externalIPs are set to .228 and .230, yet .229 also works
[root@server apis]# curl -I -s http://10.40.2.228:31697 |grep 'HTTP/1.1'|awk '{print $2}'
200
[root@server apis]# curl -I -s http://10.40.2.230:31697 |grep 'HTTP/1.1'|awk '{print $2}'
200
[root@server apis]# curl -I -s http://10.40.2.229:31697 |grep 'HTTP/1.1'|awk '{print $2}'
200
★ There is a misunderstanding here:
externalIPs is normally used together with ClusterIP; with NodePort it has no effect.
Look at the next example:
# Delete the hello-kubernetes service and rebuild it
[root@server apis]# kubectl delete service example-service
service "example-service" deleted
[root@server apis]# cat hello-world-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: hello-world-svc
spec:
externalIPs:
- 10.40.2.228
ports:
  - port: 31696 # the ClusterIP's port, which is also the port used with externalIPs
protocol: TCP
targetPort: 8080
selector:
run: load-balancer-example
sessionAffinity: None
type: ClusterIP
[root@server apis]# kubectl create -f hello-world-svc.yaml
service/hello-world-svc created
[root@server apis]# curl http://10.103.174.188:31696
Hello Kubernetes!
[root@server apis]# curl http://10.40.2.230:31696
curl: (7) Failed connect to 10.40.2.230:31696; Connection refused
Note: here the ClusterIP port matches the host port used earlier, and the Service type is ClusterIP.
[root@server apis]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
hello-world-86cddf59d5-tjjh5 1/1 Running 0 16h 172.17.1.47 client02 <none>
hello-world-86cddf59d5-z2q7c 1/1 Running 0 16h 172.17.2.48 client01 <none>
nginx-deployment-764f9dc96-2dtrg 1/1 Running 0 21m 172.17.2.49 client01 <none>
nginx-deployment-764f9dc96-9vqlh 1/1 Running 0 21m 172.17.2.50 client01 <none>
nginx-deployment-764f9dc96-bxsfx 1/1 Running 0 21m 172.17.1.49 client02 <none>
nginx-deployment-764f9dc96-gw9ss 1/1 Running 0 21m 172.17.1.48 client02 <none>
nginx-deployment-764f9dc96-wdsl6 1/1 Running 0 21m 172.17.2.51 client01 <none>
# Run the command non-interactively; the > symbol must be escaped
[root@server apis]# kubectl exec nginx-deployment-764f9dc96-2dtrg -c nginx -- /bin/echo -e "pod01\n" \> /usr/share/nginx/html/index.html
pod01
> /usr/share/nginx/html/index.html
kubectl exec nginx-deployment-764f9dc96-2dtrg -c nginx -n default -- cat /usr/share/nginx/html/index.html
The redirection did not take effect; the result was only printed to the terminal and nothing was written to the file.
Each nginx pod's page was instead given a distinguishing identifier by manually running echo -e "pod01\n" > /usr/share/nginx/html/index.html inside every pod.
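The likely reason is that no shell inside the container interprets the redirection; wrapping the whole command in sh -c should work. A sketch reusing one of the pod names above (the content is illustrative):
kubectl exec nginx-deployment-764f9dc96-2dtrg -c nginx -- /bin/sh -c 'echo "pod01" > /usr/share/nginx/html/index.html'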
# Test results:
[root@server apis]# curl http://10.40.2.230:31697
pod03
[root@server apis]# curl http://10.40.2.230:31697
pod02
[root@server apis]# curl http://10.40.2.230:31697
pod04
[root@server apis]# curl http://10.40.2.230:31697
pod05
# Change sessionAffinity to ClientIP
[root@server apis]# kubectl edit service webapp
service/webapp edited
# Test again; the results are now:
[root@client01 soft]# curl http://10.40.2.230:31697
pod03
[root@client01 soft]# curl http://10.40.2.230:31697
pod03
[root@client01 soft]# curl http://10.40.2.230:31697
pod03
Extra: docker output formatting
docker ps -f status=exited --format="{{.Names}}"
docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}"
Install the ab command
yum -y install httpd-tools
ab --help
-n  the total number of requests to perform in the test session. The default is a single request.
-c  the number of requests to perform at a time (concurrency). The default is one at a time.
-t  the maximum number of seconds to run the test; it implies -n 50000 and caps the test at a fixed total time. By default there is no time limit.
Load-test different services from different hosts:
ab -c 4500 -n 100000 http://10.40.2.229:31696/
ab -c 4500 -n 100000 http://10.40.2.228:31697/
watch -n 1 -d "ps -e -o pid,uname,etime,rss,cmd --sort=-rss |egrep -v 'grep|kube-apiserver' |grep -i proxy"
Memory usage across the cluster barely changed.
Load-test the hello-kubernetes containers
# Load-test a single pod
[root@server ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.106.196.12 <none> 8080:31696/TCP 19h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
webapp NodePort 10.100.227.245 10.40.2.230,10.40.2.228 8081:31697/TCP 3h
[root@server ~]# ab -c 3500 -n 100000 http://172.17.1.47:8080/
Requests per second: 3360.32 [#/sec] (mean)
......
Raising the concurrency to 4000 produces Connection reset by peer.
These results are not very reliable; run a proper load test before going to production.
Each ab run gives different results, so the following two access paths should be compared:
ClusterIP + externalIPs
NodePort
Summary:
Reference:
There is too much in the official docs to translate line by line; the key points are summarized here:
NodePort: a NodePort Service can be reached from outside the cluster. Conceptually, kube-proxy on every node maintains the routing for every Service; note that once this is configured, the Service is exposed on every node.
kind: Endpoints
apiVersion: v1
metadata:
name: my-service
subsets:
- addresses:
- ip: 1.2.3.4
ports:
- port: 9376
# Requests will be routed to the user-defined Endpoint (1.2.3.4:9376 in this example).
# Endpoint IP addresses may not be loopback (127.0.0.0/8), link-local (169.254.0.0/16), or link-local multicast (224.0.0.0/24).
kind: Service
apiVersion: v1
metadata:
name: my-service
namespace: prod
spec:
type: ExternalName
externalName: my.database.example.com
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
selector:
app: MyApp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 9376
- name: https
protocol: TCP
port: 443
targetPort: 9377
# Scale the deployment down to 1 replica; a moment later the autoscaler immediately scales it back up to 5
[root@server apis]# kubectl scale deployment nginx-deployment --replicas=1
deployment.extensions/nginx-deployment scaled
[root@server apis]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx-deployment Deployment/nginx-deployment <unknown>/80% 5 7 1 4d
[root@server apis]# kubectl delete hpa nginx-deployment
horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
internet
|
[ Ingress ]
--|-----|--
[ Services ]
For Ingress resources to have any effect, the cluster must have an ingress controller running. Unlike other controllers, which ship as part of the kube-controller-manager binary and start with the cluster, an ingress controller must be chosen and deployed separately; pick the one that best fits your cluster:
You can deploy multiple ingress controllers in one cluster. When you create an Ingress object and more than one controller exists, you should annotate the Ingress with the appropriate ingress-class to declare which controller should handle it. If you do not define a class, your cloud provider may use a default ingress provider.
Note:
Different ingress controllers may behave slightly differently.
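For example, binding an Ingress object to a specific controller is commonly done with an annotation; a sketch, where the class name depends on the controller actually deployed:
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"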
A minimal Ingress example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /testpath
backend:
serviceName: test
servicePort: 80
Ingress rules:
The content below is not translated line by line; see the official docs.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
spec:
backend:
serviceName: testsvc
servicePort: 80
A fanout configuration routes traffic from a single IP address to more than one service,
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: simple-fanout-example
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: service1
servicePort: 4200
- path: /bar
backend:
serviceName: service2
servicePort: 8080
foo.bar.com --| |-> foo.bar.com s1:80
| 178.91.123.132 |
bar.foo.com --| |-> bar.foo.com s2:80
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: service1
servicePort: 80
- host: bar.foo.com
http:
paths:
- backend:
serviceName: service2
servicePort: 80
Adding a default backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
spec:
rules:
- host: first.bar.com
http:
paths:
- backend:
serviceName: service1
servicePort: 80
- host: second.foo.com
http:
paths:
- backend:
serviceName: service2
servicePort: 80
- http:
paths:
- backend:
serviceName: service3
servicePort: 80
For creating the secret, see: https://kubernetes.io/docs/concepts/configuration/secret/
apiVersion: v1
data:
tls.crt: base64 encoded cert
tls.key: base64 encoded key
kind: Secret
metadata:
name: testsecret-tls
namespace: default
type: Opaque
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- sslexample.foo.com
secretName: testsecret-tls
rules:
- host: sslexample.foo.com
http:
paths:
- path: /
backend:
serviceName: service1
servicePort: 80
Methods:
Reference: https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
......
spec
hostAliases
- ip: 127.0.0.1
hostnames:
- "foo.local"
- "bar.local"
......
[root@server apis]# kubectl exec hello-world-86cddf59d5-w7kcr -- cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
172.17.1.54 hello-world-86cddf59d5-w7kcr
Trying to update it with kubectl patch fails:
[root@server apis]# kubectl patch pod hello-world-86cddf59d5-w7kcr -p '{"spec": {"hostAliases": [{"ip": "127.0.0.1","hostnames": ["foo.local"]}]}}'
The Pod "hello-world-86cddf59d5-w7kcr" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
......
$ echo -n "admin" | base64
YWRtaW4=
$ echo -n "1f2d1e2e67df" | base64
MWYyZDFlMmU2N2Rm
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
password: MWYyZDFlMmU2N2Rm
username: YWRtaW4=
kubectl create -f secrets.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: db
name: db
spec:
volumes:
- name: secrets
secret:
secretName: mysecret
containers:
- image: gcr.io/my_project_id/pg:v1
name: db
volumeMounts:
- name: secrets
mountPath: "/etc/secrets"
readOnly: true
ports:
- name: cp
containerPort: 5432
hostPort: 5432
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress-deployment
spec:
replicas: 2
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: wordpress
visualize: "true"
spec:
containers:
- name: "wordpress"
image: "wordpress"
ports:
- containerPort: 80
env:
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
$ kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
$ cat ~/.docker/config.json | base64
$ cat > myregistrykey.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
name: myregistrykey
data:
.dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson
EOF
$ kubectl create -f myregistrykey.yaml
apiVersion: v1
kind: Pod
metadata:
name: foo
spec:
containers:
- name: foo
image: janedoe/awesomeapp:v1
imagePullSecrets:
- name: myregistrykey
$ kubectl run nginx --image nginx
deployment "nginx" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-3137573019-md1u2 1/1 Running 0 13s
$ kubectl exec nginx-3137573019-md1u2 ls /run/secrets/kubernetes.io/serviceaccount
ca.crt
namespace
token
kind: ConfigMap
apiVersion: v1
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: example-config
namespace: default
data:
example.property.1: hello
example.property.2: world
example.property.file: |-
property.1=value-1
property.2=value-2
property.3=value-3
$ ls docs/user-guide/configmap/kubectl/
game.properties
ui.properties
$ cat docs/user-guide/configmap/kubectl/game.properties
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
$ cat docs/user-guide/configmap/kubectl/ui.properties
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
$ kubectl create configmap game-config --from-file=docs/user-guide/configmap/kubectl
$ kubectl describe configmaps game-config
Name: game-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
game.properties: 158 bytes
ui.properties: 83 bytes
# Output the config in YAML format.
$ kubectl get configmaps game-config -o yaml
apiVersion: v1
data:
game.properties: |
enemies=aliens
lives=3
enemies.cheat=true
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:34:05Z
name: game-config
namespace: default
resourceVersion: "407"
selfLink: /api/v1/namespaces/default/configmaps/game-config
uid: 30944725-d66e-11e5-8cd0-68f728db1985
This is exactly the same as creating from a directory above, except --from-file points at a single file.
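For example, a sketch reusing one of the files above (the ConfigMap name is illustrative):
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties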
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
$ kubectl get configmaps special-config -o yaml
apiVersion: v1
data:
special.how: very
special.type: charm
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T19:14:38Z
name: special-config
namespace: default
resourceVersion: "651"
selfLink: /api/v1/namespaces/default/configmaps/special-config
uid: dadce046-d673-11e5-8cd0-68f728db1985
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
apiVersion: v1
kind: ConfigMap
metadata:
name: env-config
namespace: default
data:
log_level: INFO
# Usage
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.type
envFrom:
- configMapRef:
name: env-config
restartPolicy: Never
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
# Usage
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY)" ]
env:
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.how
- name: SPECIAL_TYPE_KEY
valueFrom:
configMapKeyRef:
name: special-config
key: special.type
restartPolicy: Never
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
namespace: default
data:
special.how: very
special.type: charm
# Usage 1
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "cat /etc/config/special.how" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
restartPolicy: Never
# Usage 2: you can control the path within the volume where each ConfigMap value is projected
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh","-c","cat /etc/config/path/to/special-key" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: special-config
items:
- key: special.how
path: path/to/special-key
restartPolicy: Never
Reference: https://jimmysong.io/kubernetes-handbook/concepts/configmap-hot-update.html
apiVersion: v1
kind: Pod
metadata:
name: kubernetes-downwardapi-volume-example
labels:
zone: us-est-coast
cluster: test-cluster1
rack: rack-22
annotations:
build: two
builder: john-doe
spec:
containers:
- name: client-container
image: k8s.gcr.io/busybox
command: ["sh", "-c"]
args:
- while true; do
if [[ -e /etc/podinfo/labels ]]; then
echo -en '\n\n'; cat /etc/podinfo/labels; fi;
if [[ -e /etc/podinfo/annotations ]]; then
echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
sleep 5;
done;
volumeMounts:
- name: podinfo
mountPath: /etc/podinfo
readOnly: false
volumes:
- name: podinfo
downwardAPI:
items:
- path: "labels"
fieldRef:
fieldPath: metadata.labels
- path: "annotations"
fieldRef:
fieldPath: metadata.annotations
apiVersion: v1
kind: Pod
metadata:
name: kubernetes-downwardapi-volume-example-2
spec:
containers:
- name: client-container
image: k8s.gcr.io/busybox:1.24
command: ["sh", "-c"]
args:
- while true; do
echo -en '\n';
if [[ -e /etc/podinfo/cpu_limit ]]; then
echo -en '\n'; cat /etc/podinfo/cpu_limit; fi;
if [[ -e /etc/podinfo/cpu_request ]]; then
echo -en '\n'; cat /etc/podinfo/cpu_request; fi;
if [[ -e /etc/podinfo/mem_limit ]]; then
echo -en '\n'; cat /etc/podinfo/mem_limit; fi;
if [[ -e /etc/podinfo/mem_request ]]; then
echo -en '\n'; cat /etc/podinfo/mem_request; fi;
sleep 5;
done;
resources:
requests:
memory: "32Mi"
cpu: "125m"
limits:
memory: "64Mi"
cpu: "250m"
volumeMounts:
- name: podinfo
mountPath: /etc/podinfo
readOnly: false
volumes:
- name: podinfo
downwardAPI:
items:
- path: "cpu_limit"
resourceFieldRef:
containerName: client-container
resource: limits.cpu
- path: "cpu_request"
resourceFieldRef:
containerName: client-container
resource: requests.cpu
- path: "mem_limit"
resourceFieldRef:
containerName: client-container
resource: limits.memory
- path: "mem_request"
resourceFieldRef:
containerName: client-container
resource: requests.memory
A hostPath volume mounts a file or directory from the host node's filesystem into the Pod. Most Pods do not need this, but it offers a powerful escape hatch for some applications.
Common uses:
In addition to the required path property, a hostPath volume can also specify a type:
Value | Behavior |
---|---|
Empty string | The empty string (default) is for backward compatibility: no checks are performed before mounting the hostPath volume. |
DirectoryOrCreate | If nothing exists at the given path, an empty directory is created there as needed, with permissions 0755 and the same group and ownership as the kubelet. |
Directory | A directory must exist at the given path. |
FileOrCreate | If nothing exists at the given path, an empty file is created there as needed, with permissions 0644 and the same group and ownership as the kubelet. |
File | A file must exist at the given path. |
Socket | A UNIX socket must exist at the given path. |
CharDevice | A character device must exist at the given path. |
BlockDevice | A block device must exist at the given path. |
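A minimal hostPath Pod sketch using the type check described above; the path, image, and names are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: test-hostpath
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/test-webserver
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data          # directory on the host node
      type: Directory      # must already exist on the host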
Note:
Example:
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
volumes:
- name: sec-ctx-vol
emptyDir: {}
containers:
- name: sec-ctx-demo
image: gcr.io/google-samples/node-hello:1.0
volumeMounts:
- name: sec-ctx-vol
mountPath: /data/demo
securityContext:
      allowPrivilegeEscalation: false # setting this to true is not recommended; if true, processes in the container can escalate to privileged (root) mode
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 100Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node
apiVersion: v1
kind: Pod
metadata:
name: volume-test
spec:
containers:
- name: container-test
image: busybox
volumeMounts:
- name: all-in-one
mountPath: "/projected-volume"
readOnly: true
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: mysecret
items:
- key: username
path: my-group/my-username
- secret:
name: mysecret2
items:
- key: password
path: my-group/my-password
mode: 511
apiVersion: v1
kind: Pod
metadata:
name: my-lamp-site
spec:
containers:
- name: mysql
image: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: "rootpasswd"
volumeMounts:
- mountPath: /var/lib/mysql
name: site-data
subPath: mysql
- name: php
image: php:7.0-apache
volumeMounts:
- mountPath: /var/www/html
name: site-data
subPath: html
volumes:
- name: site-data
persistentVolumeClaim:
claimName: my-lamp-site-data
apiVersion: v1
kind: Pod
metadata:
name: pod1
spec:
containers:
- name: container1
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
image: busybox
command: [ "sh", "-c", "while [ true ]; do echo 'Hello'; sleep 10; done | tee -a /logs/hello.txt" ]
volumeMounts:
- name: workdir1
mountPath: /logs
subPath: $(POD_NAME)
restartPolicy: Never
volumes:
- name: workdir1
hostPath:
path: /data/log/pods
# In testing on 1.11.5, the subPath remained the literal $(POD_NAME) and was not expanded to the actual value; possibly the feature above is not enabled
Reference: https://v1-11.docs.kubernetes.io/docs/concepts/storage/persistent-volumes/
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 8Gi
storageClassName: slow
selector:
matchLabels:
release: "stable"
matchExpressions:
- {key: environment, operator: In, values: [dev]}
kind: Pod
apiVersion: v1
metadata:
name: mypod
spec:
containers:
- name: myfrontend
image: dockerfile/nginx
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myclaim
apiVersion: v1
kind: Pod
metadata:
name: pv-recycler
namespace: default
spec:
restartPolicy: Never
volumes:
- name: vol
hostPath:
path: /any/path/it/will/be/replaced
containers:
- name: pv-recycler
image: "k8s.gcr.io/busybox"
command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
volumeMounts:
- name: vol
mountPath: /scrub
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gluster-vol-default
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://192.168.10.100:8080"
restuser: ""
secretNamespace: ""
secretName: ""
allowVolumeExpansion: true
kubectl describe pvc
If the PersistentVolumeClaim has the status FileSystemResizePending, it is safe to recreate the pod using the PersistentVolumeClaim.
A StorageClass gives administrators a way to describe the classes of storage they offer. Different classes might map to different quality-of-service levels, backup policies, or other policies determined by the administrators.
Each StorageClass contains the provisioner, parameters, and reclaimPolicy fields, which are used when PersistentVolumes belonging to the class are dynamically provisioned.
The name of a StorageClass object is significant: it is how users request a particular class. Administrators set the name and other parameters when first creating a StorageClass object, and the object cannot be updated once created.
Administrators can designate a default StorageClass to be bound to PVCs that do not request any particular class.
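Marking a StorageClass as the cluster default is done with an annotation on its metadata; a sketch with an illustrative class name:
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"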
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
reclaimPolicy: Retain
mountOptions:
- debug
Common provisioners:
Volume Plugin | Internal Provisioner | Config Example |
---|---|---|
CephFS | no | see external provisioner |
Cinder | yes | example below |
Glusterfs | yes | example below |
iSCSI | no | see external provisioner |
NFS | no | see external provisioner |
RBD (Ceph RBD) | yes | example below |
Local | no | example below |
Reclaim Policy
Mount options
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: slow
provisioner: kubernetes.io/glusterfs
parameters:
resturl: "http://127.0.0.1:8081"
clusterid: "630372ccdc720a92c681fb928f27b53f"
restauthenabled: "true"
restuser: "admin"
secretNamespace: "default"
secretName: "heketi-secret"
gidMin: "40000"
gidMax: "50000"
volumetype: "replicate:3"
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
type: fast
availability: nova
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast
provisioner: kubernetes.io/rbd
parameters:
monitors: 10.16.153.105:6789
adminId: kube
adminSecretName: ceph-secret
adminSecretNamespace: kube-system
pool: kube
userId: kube
userSecretName: ceph-secret-user
fsType: ext4
imageFormat: "2"
imageFeatures: "layering"
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
--from-literal=key='QVFEQ1pMdFhPUnQrSmhBQUFYaERWNHJsZ3BsMmNjcDR6RFZST0E9PQ==' \
--namespace=kube-system
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer