Package your scheduler binary into a container image. For the purposes of this example, we will simply use the default scheduler (kube-scheduler) as our second scheduler. Clone the Kubernetes source code from GitHub and build it:
git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
make
Create a container image containing the kube-scheduler binary. Here is the Dockerfile used to build the image:
FROM busybox
ADD ./_output/dockerized/bin/linux/amd64/kube-scheduler /usr/local/bin/kube-scheduler
Save the file as Dockerfile, build the image, and push it to a registry. This example pushes the image to Google Container Registry (GCR). For more details, please read the GCR documentation.
docker build -t gcr.io/my-gcp-project/my-kube-scheduler:1.0 .
gcloud docker -- push gcr.io/my-gcp-project/my-kube-scheduler:1.0
Now that we have the scheduler in a container image, we can create a pod configuration for it and run it in our Kubernetes cluster. Rather than creating a pod directly in the cluster, however, let's use a Deployment for this example. A Deployment manages a ReplicaSet, which in turn manages the pods, thereby making the scheduler resilient to failures. Here is the Deployment config; save it as my-scheduler.yaml:
admin/sched/my-scheduler.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
  name: my-scheduler
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-scheduler
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: scheduler
      tier: control-plane
  replicas: 1
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
    spec:
      serviceAccountName: my-scheduler
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --leader-elect=false
        - --scheduler-name=my-scheduler
        image: gcr.io/my-gcp-project/my-kube-scheduler:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10251
          initialDelaySeconds: 15
        name: kube-second-scheduler
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10251
        resources:
          requests:
            cpu: '0.1'
        securityContext:
          privileged: false
        volumeMounts: []
      hostNetwork: false
      hostPID: false
      volumes: []
An important thing to note here is that the scheduler name specified as an argument to the scheduler command in the container spec should be unique. This is the name that is matched against the value of the optional spec.schedulerName on pods, and it determines whether this scheduler is responsible for scheduling a particular pod.
Note also that we created a dedicated service account my-scheduler and bound the cluster role system:kube-scheduler to it so that it acquires the same privileges as kube-scheduler.
Please see the kube-scheduler documentation for detailed descriptions of other command line arguments.
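The matching rule described above can be sketched as follows. This is a hedged illustration, not actual kube-scheduler code: each scheduler instance only schedules pods whose spec.schedulerName matches its own --scheduler-name flag, while pods that omit the field fall to "default-scheduler".

```python
# Sketch (assumption, not kube-scheduler source): a scheduler instance
# considers only pods whose spec.schedulerName matches its own name;
# pods without the field are handled by "default-scheduler".

def responsible_pods(pods, scheduler_name):
    """Return the pods that this scheduler instance should schedule."""
    return [
        p for p in pods
        if p.get("spec", {}).get("schedulerName", "default-scheduler") == scheduler_name
    ]

pods = [
    {"metadata": {"name": "a"}, "spec": {}},
    {"metadata": {"name": "b"}, "spec": {"schedulerName": "my-scheduler"}},
]

print([p["metadata"]["name"] for p in responsible_pods(pods, "my-scheduler")])       # ['b']
print([p["metadata"]["name"] for p in responsible_pods(pods, "default-scheduler")])  # ['a']
```

This is why the scheduler name must be unique: if two schedulers used the same name, both would claim the same pods.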
To run the scheduler in a Kubernetes cluster, just create the Deployment specified in the config above:
kubectl create -f my-scheduler.yaml
Verify that the scheduler pod is running:
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
....
my-scheduler-lnf4s-4744f 1/1 Running 0 2m
...
You should see a "Running" my-scheduler pod in this list, in addition to the default kube-scheduler pod.
To run the multiple schedulers with leader election enabled, you must do the following:
First, update the following fields in the YAML file:
--leader-elect=true
--lock-object-namespace=lock-object-namespace
--lock-object-name=lock-object-name
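For example, the scheduler command in the Deployment might then look as follows. The lock object namespace and name here are illustrative values only; substitute ones appropriate for your cluster:

```yaml
- command:
  - /usr/local/bin/kube-scheduler
  - --address=0.0.0.0
  - --leader-elect=true
  - --lock-object-namespace=kube-system   # illustrative value
  - --lock-object-name=my-scheduler       # illustrative value
  - --scheduler-name=my-scheduler
```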
If RBAC is enabled on the cluster, you must update the system:kube-scheduler cluster role. Add your scheduler name to the resourceNames of the rule applied to the endpoints resources, as in the following example:
kubectl edit clusterrole system:kube-scheduler
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-scheduler
rules:
- apiGroups:
  - ""
  resourceNames:
  - kube-scheduler
  - my-scheduler
  resources:
  - endpoints
  verbs:
  - delete
  - get
  - patch
  - update
admin/sched/pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotation-second-scheduler
  labels:
    name: multischeduler-example
spec:
  schedulerName: my-scheduler
  containers:
  - name: pod-with-second-annotation-container
    image: k8s.gcr.io/pause:2.0
In this case, we specify that this pod should be scheduled using the scheduler that we deployed, my-scheduler. Note that the value of spec.schedulerName should match the name provided to the scheduler command as an argument in the Deployment config for the scheduler. Save this file as pod3.yaml and submit it to the Kubernetes cluster:
kubectl create -f pod3.yaml
The scheduling policy of the second scheduler can also be customized. For example, consider the following policy:

{
    "kind": "Policy",
    "apiVersion": "v1",
    "predicates": [
        {"name": "PodFitsHostPorts", "order": 2},
        {"name": "PodFitsResources", "order": 3},
        {"name": "NoDiskConflict", "order": 5},
        {"name": "PodToleratesNodeTaints", "order": 4},
        {"name": "MatchNodeSelector", "order": 6},
        {"name": "PodFitsHost", "order": 1}
    ],
    "priorities": [
        {"name": "LeastRequestedPriority", "weight": 1},
        {"name": "BalancedResourceAllocation", "weight": 1},
        {"name": "ServiceSpreadingPriority", "weight": 1},
        {"name": "EqualPriority", "weight": 1}
    ],
    "hardPodAffinitySymmetricWeight": 10
}
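The effect of the "order" fields in the policy above can be sketched as follows. This is an assumption-labeled illustration: it assumes predicates are evaluated in ascending order of their "order" value, so PodFitsHost (order 1) runs first and MatchNodeSelector (order 6) last.

```python
# Sketch (assumption): predicates in the policy are evaluated in
# ascending order of their "order" field.
predicates = [
    {"name": "PodFitsHostPorts", "order": 2},
    {"name": "PodFitsResources", "order": 3},
    {"name": "NoDiskConflict", "order": 5},
    {"name": "PodToleratesNodeTaints", "order": 4},
    {"name": "MatchNodeSelector", "order": 6},
    {"name": "PodFitsHost", "order": 1},
]
evaluation_order = [p["name"] for p in sorted(predicates, key=lambda p: p["order"])]
print(evaluation_order)
# ['PodFitsHost', 'PodFitsHostPorts', 'PodFitsResources',
#  'PodToleratesNodeTaints', 'NoDiskConflict', 'MatchNodeSelector']
```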
To use this custom policy, modify the my-scheduler yaml file by adding:
- --policy-configmap=my-scheduler-config
- --policy-configmap-namespace=kube-system
as shown below:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: my-scheduler-as-kube-scheduler
subjects:
- kind: ServiceAccount
  name: my-scheduler
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-scheduler
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: scheduler
      tier: control-plane
  replicas: 1
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
    spec:
      serviceAccountName: my-scheduler
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --leader-elect=false
        - --scheduler-name=my-scheduler
        - --policy-configmap=my-scheduler-config
        - --policy-configmap-namespace=kube-system
        image: gcr.io/my-gcp-project/my-kube-scheduler:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10251
          initialDelaySeconds: 15
        name: kube-second-scheduler
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10251
        resources:
          requests:
            cpu: '0.1'
        securityContext:
          privileged: false
        volumeMounts: []
      hostNetwork: false
      hostPID: false
      volumes: []
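The --policy-configmap=my-scheduler-config flag above refers to a ConfigMap that holds the custom policy. A sketch of such a ConfigMap, assuming policy.cfg as the data key that kube-scheduler reads, might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-scheduler-config
  namespace: kube-system
data:
  policy.cfg: |
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [
        {"name": "PodFitsHostPorts", "order": 2},
        {"name": "PodFitsResources", "order": 3},
        {"name": "NoDiskConflict", "order": 5},
        {"name": "PodToleratesNodeTaints", "order": 4},
        {"name": "MatchNodeSelector", "order": 6},
        {"name": "PodFitsHost", "order": 1}
      ],
      "priorities": [
        {"name": "LeastRequestedPriority", "weight": 1},
        {"name": "BalancedResourceAllocation", "weight": 1},
        {"name": "ServiceSpreadingPriority", "weight": 1},
        {"name": "EqualPriority", "weight": 1}
      ],
      "hardPodAffinitySymmetricWeight": 10
    }
```

Create the ConfigMap in the same namespace given by --policy-configmap-namespace (kube-system here) before restarting the scheduler.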