Official explanation:
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: myapp:v1
  nodeName: server4
At this point one Pod is running on server3 and one on server4:
The drawback of specifying the node name by hand is that if the named node does not exist, scheduling fails:
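A quick way to check where the Pods ended up (a sketch, assuming the manifests above were applied):
kubectl get pod -o wide      # the NODE column shows which node each Pod landed on
kubectl describe pod nginx   # the status shows Pending when the Pod was bound to a non-existent node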
kubectl label nodes server2 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disktype: ssd
Label the server3 node: kubectl label nodes server3 disktype=ssd
Show all labels on every node: kubectl get node --show-labels
Show the nodes that carry a given label: kubectl get nodes -l disktype
Show the value of a given label key as an extra column: kubectl get node -L disktype
Remove the label from a node: kubectl label node server3 disktype-
Apply the manifest; the Pod is now scheduled onto server3:
Affinity and anti-affinity:
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - sata
Add a label to server4: kubectl label nodes server4 disktype=sata
At scheduling time the hard requirement in the manifest (requiredDuringSchedulingIgnoredDuringExecution) must be satisfied first; only then are the soft preferences (preferredDuringSchedulingIgnoredDuringExecution) taken into account.
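As a sketch of combining the two, the manifest below requires disktype=sata and merely prefers server4 (the Pod name and the preferred hostname are illustrative assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-preferred
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard: must match or the Pod stays Pending
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - sata
      preferredDuringSchedulingIgnoredDuringExecution:  # soft: only influences scoring
      - weight: 1
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - server4
Pod affinity works the same way but matches labels on other Pods instead of on nodes. In the next example the myapp Pod is required to land on the same node as the nginx Pod: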
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
myapp ends up on the same node as nginx:
For anti-affinity only one keyword changes (podAffinity becomes podAntiAffinity):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:v1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx
        topologyKey: kubernetes.io/hostname
server2, being the control-plane node, carries a taint by default that keeps it out of scheduling:
Create a taint: kubectl taint nodes node1 key=value:NoSchedule
Query: kubectl describe nodes server2 | grep Taints
Delete: kubectl taint nodes node1 key:NoSchedule-
A taint's effect can be NoSchedule, PreferNoSchedule, or NoExecute.
The key, value, and effect defined under tolerations must match the taint set on the node.
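A minimal sketch of a matching toleration, assuming a node was tainted with key=value:NoSchedule as above (the Pod name is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: nginx-toleration
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:             # matches the taint's key, value and effect
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"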
Pods created afterwards will no longer be scheduled onto the node (existing Pods are left in place): kubectl cordon server3
Evict every Pod on the node to other nodes and stop scheduling new ones onto it; note that the evicted Pods must fit on other nodes, and DaemonSet-managed Pods have to be skipped: kubectl drain server3 --ignore-daemonsets
Delete the node: kubectl delete nodes server3
Rejoin the cluster (relying on kubelet's self-registration) by restarting kubelet on the node: systemctl restart kubelet
The normal maintenance flow (for example, shutting a node down to upgrade its hardware) is to drain first, then delete.
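Put together, the maintenance sequence might look like this (a sketch; the node name follows the example above):
kubectl drain server3 --ignore-daemonsets   # evict workloads and mark the node unschedulable
kubectl delete nodes server3                # remove the Node object from the cluster
# ... perform the hardware maintenance, then on server3 itself:
systemctl restart kubelet                   # kubelet re-registers the node with the API server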
ServiceAccounts are used mainly inside the cluster; every request to the API server goes through authentication, authorization, and admission control.
Bind the image-pull secret to a ServiceAccount (here the default one) and reference that ServiceAccount in the manifest; images can then be pulled from the private registry: kubectl patch serviceaccounts default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'
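The secret referenced above would typically be created first, along these lines (the registry address and credentials are illustrative assumptions):
kubectl create secret docker-registry myregistrykey \
  --docker-server=reg.example.com \
  --docker-username=admin \
  --docker-password=changeme
After the patch, Pods running under the default ServiceAccount in this namespace pull from the private registry with these credentials automatically.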
User accounts, in contrast, are mainly used to log in to the cluster from outside.
Change into the directory where Kubernetes keeps its certificates: /etc/kubernetes/pki
Generate a private key for the test user: openssl genrsa -out test.key 2048
Generate a certificate signing request from test.key: openssl req -new -key test.key -out test.csr -subj "/CN=test"
Sign the CSR with the cluster CA to issue a certificate valid for 365 days: openssl x509 -req -in test.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test.crt -days 365
This finally produces test.crt.
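As a quick sanity check, the subject and validity period of the new certificate can be inspected:
openssl x509 -in test.crt -noout -subject -dates   # should report CN=test and a one-year window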
Show the current kubeconfig: kubectl config view
To let the test user access the cluster, its certificate and key also have to be registered: kubectl config set-credentials test --client-certificate=/etc/kubernetes/pki/test.crt --client-key=/etc/kubernetes/pki/test.key --embed-certs=true
Besides the default account, the newly created test user now shows up:
Set a context for the test user; the context ties the user to the cluster and is what you switch between: kubectl config set-context test@kubernetes --cluster=kubernetes --user=test
Switch to the test user: kubectl config use-context test@kubernetes
At this point the test user is authenticated, but it has not been authorized yet, so it cannot use any cluster resources.
To grant it permissions, first switch back to the admin context: kubectl config use-context kubernetes-admin@kubernetes
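From the admin context the test user's permissions can be probed via impersonation (a sketch; it assumes the admin context is allowed to impersonate users, which the default cluster-admin role permits):
kubectl auth can-i list pods --as=test   # prints "no" while no RBAC binding exists for test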
RBAC
The default authorization mode is RBAC (Role-Based Access Control):
The basic concepts of RBAC:
Role and ClusterRole
Create a role:
vim roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: myrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
There are two ways to bind a role: RoleBinding and ClusterRoleBinding.
With a RoleBinding you must specify the namespace (default here) and the subject it applies to (a user account, here the newly created test user); roleRef references the role being bound. The complete manifest that creates the role and its binding follows:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: myrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-read-pods
  namespace: default
subjects:
- kind: User
  name: test
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: myrole
  apiGroup: rbac.authorization.k8s.io
Now switch to the test user and verify: it can work with cluster resources, but only in the default namespace; it has no permission to view Pods in kube-system:
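A quick check of the expected behaviour from the test context:
kubectl get pods                  # allowed: myrole grants the pod verbs in the default namespace
kubectl get pods -n kube-system   # denied: the RoleBinding is scoped to the default namespace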
Switch back to admin and additionally define a ClusterRole that allows operations on pods and deployments, then bind it to the test user (still through a namespaced RoleBinding):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: myrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-read-pods
  namespace: default
subjects:
- kind: User
  name: test
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: myrole
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: myclusterrole
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list", "create", "update"]
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]
  verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rolebind-myclusterrole
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: test
Now the test user can also work with Deployments, but it still cannot look into other namespaces.
Finally, add a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterrolebinding-myclusterrole
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: test
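Once this ClusterRoleBinding is applied, the permissions in myclusterrole follow the test user cluster-wide; for example, from the test context:
kubectl get pods -n kube-system   # now allowed
kubectl get deployments -A        # deployments are visible across all namespaces as well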
Additional notes: