Scheduling places Pod resources onto an appropriate node; a Pod can be assigned automatically by the scheduler or pinned to a node manually.
nodeName: schedules the Pod onto the specified node directly, bypassing the scheduler entirely.
[root@master demo]# vim pod-ns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: 20.0.0.30
  containers:
  - name: nginx
    image: nginx:1.15
Check the detailed events: the Events list shows the Pod never went through the scheduler.
[root@master demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod-example 1/1 Running 0 108s 172.17.3.3 20.0.0.30
[root@master demo]# kubectl describe pod pod-example
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 3m55s kubelet, 20.0.0.30 pulling image "nginx:1.15"
Normal Pulled 3m25s kubelet, 20.0.0.30 Successfully pulled image "nginx:1.15"
Normal Created 3m25s kubelet, 20.0.0.30 Created container
Normal Started 3m25s kubelet, 20.0.0.30 Started container
nodeSelector: schedules the Pod onto a node whose labels match the selector.
[root@master demo]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
20.0.0.30 Ready 14d v1.12.3
20.0.0.40 Ready 14d v1.12.3
[root@master demo]# kubectl label nodes 20.0.0.30 njit=a
node/20.0.0.30 labeled
[root@master demo]# kubectl label nodes 20.0.0.40 njit=b
node/20.0.0.40 labeled
[root@master demo]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
20.0.0.30 Ready 14d v1.12.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=20.0.0.30,njit=a
20.0.0.40 Ready 14d v1.12.3 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=20.0.0.40,njit=b
[root@master demo]# vim pod-ns.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeSelector:
    njit: b
  containers:
  - name: nginx
    image: nginx:1.15
[root@master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-example 1/1 Running 0 83s
Check the detailed events: this time the Events list records that the default scheduler assigned the Pod.
[root@master demo]# kubectl describe pod pod-example
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m2s default-scheduler Successfully assigned default/pod-example to 20.0.0.40
Normal Pulling 2m1s kubelet, 20.0.0.40 pulling image "nginx:1.15"
Normal Pulled 98s kubelet, 20.0.0.40 Successfully pulled image "nginx:1.15"
Normal Created 97s kubelet, 20.0.0.40 Created container
Normal Started 97s kubelet, 20.0.0.40 Started container
Pod phase (status) values:

Value | Description
---|---
Pending | The Pod has been accepted by Kubernetes but cannot be brought up yet, for example because an image is still downloading or scheduling has not succeeded.
Running | The Pod is bound to a node and all of its containers have been created; at least one container is running, starting, or restarting.
Succeeded | All containers in the Pod terminated successfully and will not be restarted.
Failed | All containers in the Pod have terminated and at least one terminated in failure, i.e. exited with a non-zero status or was killed by the system.
Unknown | The apiserver cannot obtain the Pod's state, usually because the master cannot communicate with the kubelet on the Pod's node.
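To check a Pod's current phase directly, the status.phase field can be queried; a minimal sketch against the pod-example Pod above:
kubectl get pod pod-example -o jsonpath='{.status.phase}'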
Controllers, also called workloads, manage and run containers on the cluster and are tied to their Pods through label selectors; there are five main types.
Pods rely on controllers for operational tasks such as scaling and upgrades.
Deployment
Typical use case: web services
[root@master demo]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
[root@master demo]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
Check the Deployment controller and its ReplicaSet:
[root@master demo]# kubectl get pods,deploy,rs
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-d55b94fd-4fdkj 1/1 Running 0 81s
pod/nginx-deployment-d55b94fd-947w9 1/1 Running 0 81s
pod/nginx-deployment-d55b94fd-h2jkh 1/1 Running 0 81s
pod/pod-example 1/1 Running 0 13h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/nginx-deployment 3 3 3 3 82s
NAME DESIRED CURRENT READY AGE
replicaset.extensions/nginx-deployment-d55b94fd 3 3 3 81s
Inspect the controller; the Deployment (through its ReplicaSets) controls version updates, replica count, and rollbacks:
[root@master demo]# kubectl edit deployment/nginx-deployment
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
The strategy block defines the RollingUpdate behaviour:
maxSurge: 25% means that with 100 desired Pods, the update may create up to 25 extra Pods, so the total can temporarily reach 125.
maxUnavailable: 25% means that with 100 desired Pods, at most 25 old Pods may be taken down (unavailable) at any moment during the update.
Keep maxUnavailable modest (and note that maxSurge and maxUnavailable cannot both be 0): old Pods are only removed as new ones become ready, and a low maxUnavailable guarantees enough replicas stay available to serve traffic throughout the update.
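For reference, the strategy can also be set explicitly in the Deployment manifest; a minimal sketch (the values here are illustrative, not what the cluster above uses):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow at most 1 extra Pod above the desired count during an update
      maxUnavailable: 0    # keep all 3 desired replicas available while updating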
View the rollout history:
[root@master demo]# kubectl rollout history deployment/nginx-deployment
deployment.extensions/nginx-deployment
REVISION CHANGE-CAUSE
1
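CHANGE-CAUSE is empty because the Deployment was created without --record. A few related commands, as a sketch (the nginx:1.16 tag is illustrative):
kubectl set image deployment/nginx-deployment nginx=nginx:1.16 --record    # trigger a rolling update
kubectl rollout status deployment/nginx-deployment                         # watch the rollout progress
kubectl rollout history deployment/nginx-deployment                        # list recorded revisions
kubectl rollout undo deployment/nginx-deployment --to-revision=1           # roll back to revision 1
kubectl scale deployment/nginx-deployment --replicas=5                     # change the replica count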
Stateless (Deployment):
1. The Deployment treats all of its Pods as identical.
2. Startup order does not matter.
3. It does not matter which node a Pod runs on.
4. Pods can be scaled out and in freely.
Stateful (StatefulSet):
1. Instances differ from one another; each has its own identity and metadata, e.g. etcd, ZooKeeper.
2. Instances are not interchangeable, and the application typically relies on external storage.
DaemonSet: runs one Pod on every node.
A newly added node automatically gets a copy of the Pod as well.
Typical use case: per-node agents.
https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
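A minimal DaemonSet sketch (the node-agent name and the busybox placeholder image are illustrative, not something deployed above):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.28.4
        args: ["/bin/sh", "-c", "sleep 36000"]   # stand-in for a real per-node agent process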
Jobs come in two kinds: one-off tasks (Job) and scheduled tasks (CronJob).
Job: runs a task to completion once.
Typical use cases: offline data processing, video transcoding, and similar batch work.
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
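A minimal Job sketch (illustrative; this is the classic compute-pi example from the Kubernetes docs):
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never    # a Job's Pod template must use Never or OnFailure
  backoffLimit: 4             # give up after 4 failed retries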
CronJob: runs a task on a recurring schedule.
Typical use cases: notifications, backups.
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/
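A minimal CronJob sketch (illustrative; on this v1.12 cluster the API group is batch/v1beta1, while newer releases use batch/v1):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"      # run once every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28.4
            args: ["/bin/sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: OnFailure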
Service: an access policy for a set of Pods; it provides a stable ClusterIP for in-cluster communication, plus load balancing and service discovery.
Headless Service: a Service without a ClusterIP; DNS resolves directly to the IPs of the individual Pods.
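The nginx-service that appears in the output below is a NodePort Service created earlier; its manifest is not shown in these notes, but it would look roughly like this (a sketch; the nodePort is auto-assigned unless set explicitly):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # ClusterIP port inside the cluster
    targetPort: 80    # container port on the Pods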
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 443/TCP 15d
nginx-service NodePort 10.0.0.223 80:42524/TCP 8m57s
On a node, use the ClusterIP for in-cluster communication; if it is unreachable, try iptables -F and setenforce 0.
[root@node01 ~]# curl 10.0.0.223
Welcome to nginx!
(the rest of the default nginx welcome page follows)
Because Pods scale up and down, their IP addresses change frequently, so we rely on DNS names instead.
Set clusterIP to None to create a headless Service:
[root@master demo]# vim headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
The nginx Service's CLUSTER-IP now shows as None:
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 443/TCP 15d
nginx ClusterIP None 80/TCP 21s
nginx-service NodePort 10.0.0.223 80:42524/TCP 16m
Set up DNS
https://www.kubernetes.org.cn/4694.html
Place the coredns.yaml file in the master's home directory.
[root@master ~]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
[root@master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-56684f94d6-85j72 1/1 Running 0 2m41s
kubernetes-dashboard-7dffbccd68-kvlqd 1/1 Running 1 2d17h
Create a test Pod to verify DNS resolution:
[root@master demo]# vim pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args:
    - /bin/sh
    - -c
    - sleep 36000
  restartPolicy: Never
[root@master demo]# kubectl create -f pod3.yaml
pod/dns-test created
[root@master demo]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 443/TCP 15d
nginx ClusterIP None 80/TCP 30m
nginx-service NodePort 10.0.0.223 80:42524/TCP 46m
Verify name resolution:
[root@master demo]# kubectl exec -it dns-test sh
/ # nslookup kubernetes
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
/ # nslookup nginx-service
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-service
Address 1: 10.0.0.223 nginx-service.default.svc.cluster.local
Clean up the resources on the master:
[root@master demo]# kubectl delete -f .
Recreate the resources so that individual Pod IPs can be resolved through DNS:
[root@master demo]# vim sts.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: nginx-statefulset
  namespace: default
spec:
  serviceName: nginx
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Create the resources:
[root@master demo]# kubectl create -f sts.yaml
[root@master demo]# kubectl create -f /root/coredns.yaml
[root@master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod/dns-test 1/1 Running 0 44m
pod/nginx-statefulset-0 1/1 Running 0 46m
pod/nginx-statefulset-1 1/1 Running 0 46m
pod/nginx-statefulset-2 1/1 Running 0 45m
[root@master demo]# kubectl apply -f pod3.yaml
pod/dns-test created
[root@master demo]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
dns-test 1/1 Running 0 7m31s 172.17.47.3 20.0.0.40
nginx-statefulset-0 1/1 Running 0 9m1s 172.17.47.2 20.0.0.40
nginx-statefulset-1 1/1 Running 0 8m44s 172.17.3.2 20.0.0.30
nginx-statefulset-2 1/1 Running 0 8m22s 172.17.3.3 20.0.0.30
If names do not resolve, recreate the DNS components: kubectl delete -f coredns.yaml followed by kubectl create -f coredns.yaml.
[root@master demo]# kubectl exec -it dns-test sh
/ # nslookup nginx-statefulset-0.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-statefulset-0.nginx
Address 1: 172.17.47.2 nginx-statefulset-0.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-1.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-statefulset-1.nginx
Address 1: 172.17.3.2 nginx-statefulset-1.nginx.default.svc.cluster.local
/ # nslookup nginx-statefulset-2.nginx
Server: 10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name: nginx-statefulset-2.nginx
Address 1: 172.17.3.3 nginx-statefulset-2.nginx.default.svc.cluster.local
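Each StatefulSet Pod gets a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, so a specific replica can be addressed directly. For example, from inside the dns-test Pod you could fetch the nginx page of the first replica (a sketch; busybox ships wget rather than curl):
wget -q -O - nginx-statefulset-0.nginx.default.svc.cluster.local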
Configuration management in Kubernetes lets you reference values as parameters or mount prepared files into Pods, which is handy when many configuration files would otherwise need editing.
Secret: stores sensitive data (base64-encoded) in etcd and exposes it to Pod containers as environment variables or mounted volumes.
Typical use case: credentials.
Method 1: create a Secret from files:
[root@master demo]# echo -n 'admin' > ./username.txt
[root@master demo]# echo -n '123456' > ./password.txt
[root@master demo]# kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
secret/db-user-pass created
[root@master demo]# kubectl get secret
NAME TYPE DATA AGE
db-user-pass Opaque 2 10s
default-token-cscgs kubernetes.io/service-account-token 3 16d
[root@master demo]# kubectl describe secret db-user-pass
Name: db-user-pass
Namespace: default
Labels:
Annotations:
Type: Opaque
Data
====
password.txt: 6 bytes
username.txt: 5 bytes
Method 2: define the Secret in a YAML manifest and consume it from a Pod:
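The Pod below reads keys from a Secret named mysecret, which is not shown above; a manifest for it would look roughly like this (the assumed values are the base64 encodings of the admin/123456 credentials used earlier):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=    # base64 of "admin"
  password: MTIzNDU2    # base64 of "123456"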
[root@master demo]# vim secret-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
Values stored in the Secret are passed into the Pod.
The env block defines the environment variables SECRET_USERNAME and SECRET_PASSWORD, whose values come from the username and password keys of mysecret.
Once the Pod is created, the variables are available inside it:
[root@master demo]# kubectl apply -f secret-var.yaml
pod/mypod created
[root@master demo]# kubectl exec -it mypod bash
root@mypod:/# echo $SECRET-USERNAME
-USERNAME
root@mypod:/# echo $SECRET_USERNAME
admin
root@mypod:/# echo $SECRET_PASSWORD
123456
Mounting the Secret as files
[root@master demo]# vim secret-vol.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
The volumes section defines a volume named foo backed by the mysecret Secret; anything that needs the data simply mounts the volume by that name.
volumeMounts mounts it into the container at /etc/foo.
[root@master demo]# kubectl delete -f secret-var.yaml
pod "mypod" deleted
[root@master demo]# kubectl create -f secret-vol.yaml
pod/mypod created
[root@master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 7h40m
mypod 1/1 Running 0 18s
nginx-statefulset-0 1/1 Running 0 7h41m
nginx-statefulset-1 1/1 Running 0 7h41m
nginx-statefulset-2 1/1 Running 0 7h40m
Enter the container and check the mounted files:
[root@master demo]# kubectl exec -it mypod bash
root@mypod:/# ls /etc/fo
fonts/ foo/
root@mypod:/# ls /etc/foo/
password username
root@mypod:/# cd /etc/foo/
root@mypod:/etc/foo# cat password
123456root@mypod:/etc/foo# cat username
adminroot@mypod:/etc/foo#
ConfigMap is similar to Secret, except that it holds non-sensitive configuration data in plain text.
Typical use case: application configuration.
[root@master demo]# vim redis.properties
redis.host=127.0.0.1
redis.port=6379
redis.password=123456
[root@master demo]# kubectl create configmap redis-confg --from-file=redis.properties
configmap/redis-confg created
[root@master demo]# kubectl get configmap
NAME DATA AGE
redis-confg 1 10s
[root@master demo]# kubectl describe cm redis-confg
Name: redis-confg
Namespace: default
Labels:
Annotations:
Data
====
redis.properties:
----
redis.host=127.0.0.1
redis.port=6379
redis.password=123456
Events:
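The redis-confg ConfigMap created above can also be mounted into a Pod as a file; a minimal sketch (the Pod name redis-config-test is made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: redis-config-test
spec:
  containers:
  - name: busybox
    image: busybox:1.28.4
    args: ["/bin/sh", "-c", "cat /etc/config/redis.properties && sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: redis-confg    # each key becomes a file, so /etc/config/redis.properties appears in the container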
Passing ConfigMap values as environment variables
Create the ConfigMap resource:
[root@master demo]# vim myconfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myconfig
  namespace: default
data:
  special.level: info
  special.type: hello
[root@master demo]# kubectl apply -f myconfig.yaml
configmap/myconfig created
[root@master demo]# vim config-var.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [ "/bin/sh", "-c", "echo $(LEVEL) $(TYPE)" ]
    env:
    - name: LEVEL
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.level
    - name: TYPE
      valueFrom:
        configMapKeyRef:
          name: myconfig
          key: special.type
  restartPolicy: Never
The container simply echoes the two variables: LEVEL comes from special.level in myconfig and TYPE comes from special.type.
Delete the mypod created earlier:
[root@master demo]# kubectl delete pod mypod
Create the new Pod:
[root@master demo]# kubectl create -f config-var.yaml
pod/mypod created
[root@master demo]# kubectl get pods
NAME READY STATUS RESTARTS AGE
dns-test 1/1 Running 0 8h
mypod 0/1 Completed 0 30s
Check the echoed output:
[root@master demo]# kubectl logs mypod
info hello