[k8s source code analysis][kube-scheduler] scheduler/algorithm: the predicate phase

1. Introduction

Please credit the original source when reposting; respect the author's work!

This article analyzes the scheduler's predicates, mainly covering pkg/scheduler/algorithm/predicates/predicates.go and pkg/scheduler/algorithm/types.go.
Source code: https://github.com/nicktming/kubernetes
Branch: tming-v1.13 (based on v1.13)

2. Predicate definitions (FitPredicate)

// FitPredicate is a function that checks whether a pod fits on a given node.
type FitPredicate func(pod *v1.Pod, meta PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []PredicateFailureReason, error)
// PredicateMetadataProducer is a function that computes predicate metadata for a given pod.
type PredicateMetadataProducer func(pod *v1.Pod, nodeNameToInfo map[string]*schedulercache.NodeInfo) PredicateMetadata

// PredicateMetadata interface represents anything that can access a predicate metadata.
type PredicateMetadata interface {
    ShallowCopy() PredicateMetadata
    AddPod(addedPod *v1.Pod, nodeInfo *schedulercache.NodeInfo) error
    RemovePod(deletedPod *v1.Pod) error
}

As you can see, FitPredicate is the predicate function signature:
pod: the pod that needs to be scheduled.
meta: a PredicateMetadata (you can ignore predicateMetadata for now; it does not affect the understanding of the predicates and will be analyzed later).
nodeInfo: the information of the node being evaluated.
The return values say whether the pod fits the node described by nodeInfo; if it does not, the failure reasons are returned as well.
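
To make the contract concrete, here is a minimal sketch of a predicate written against the FitPredicate signature above. It is purely illustrative: the label key, the failure-reason type and the function name are all invented and not part of the Kubernetes source.

// simpleFailureReason is a hypothetical implementation of PredicateFailureReason.
type simpleFailureReason string

func (r simpleFailureReason) GetReason() string { return string(r) }

// noMaintenancePredicate is a made-up FitPredicate: it marks a node as unfit
// when the node carries the (invented) label "example.com/maintenance=true".
func noMaintenancePredicate(pod *v1.Pod, meta PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []PredicateFailureReason, error) {
    node := nodeInfo.Node()
    if node == nil {
        // An error means scheduling this pod failed outright, not just that the node is unfit.
        return false, nil, fmt.Errorf("node not found")
    }
    if node.Labels["example.com/maintenance"] == "true" {
        // Unfit: return false plus the failure reasons, with no error.
        return false, []PredicateFailureReason{simpleFailureReason("node is under maintenance")}, nil
    }
    // Fit: true, no reasons, no error.
    return true, nil, nil
}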

Next we walk through a few common predicates. Since [k8s source code analysis][kube-scheduler] scheduler/algorithmprovider: registering the default-scheduler already touched on them, we simply pick a few from pkg/scheduler/algorithmprovider/defaults/defaults.go and look at them one by one.

3. Predicates

3.1 PodFitsHostPorts

Checks whether the host ports already in use on the node conflict with the host ports requested by the pod.

func PodFitsHostPorts(pod *v1.Pod, meta algorithm.PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
    var wantPorts []*v1.ContainerPort
    // If meta can be cast to *predicateMetadata, take the wanted ports from it.
    // Don't worry too much about meta here: when it is not nil, it simply holds values precomputed from the pod.
    if predicateMeta, ok := meta.(*predicateMetadata); ok {
        wantPorts = predicateMeta.podPorts
    } else {
        // We couldn't parse metadata - fallback to computing it.
        wantPorts = schedutil.GetContainerPorts(pod)
    }
    if len(wantPorts) == 0 {
        return true, nil, nil
    }

    // Get the host ports already in use on this node.
    existingPorts := nodeInfo.UsedPorts()

    // Check whether existingPorts and wantPorts conflict.
    if portsConflict(existingPorts, wantPorts) {
        return false, []algorithm.PredicateFailureReason{ErrPodNotFitsHostPorts}, nil
    }

    return true, nil, nil
}
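
The conflict test itself boils down to comparing (protocol, hostIP, hostPort) triples. Below is a simplified, standalone sketch of that idea; it is not the real portsConflict/HostPortInfo code, which among other things also treats the 0.0.0.0 wildcard address as conflicting with every specific host IP.

// portKey identifies a host port by protocol, host IP and port number.
type portKey struct {
    protocol string
    hostIP   string
    hostPort int32
}

// hasPortConflict reports whether any of the wanted container ports clashes with
// a host port already used on the node (simplified illustration only).
func hasPortConflict(existing map[portKey]struct{}, wantPorts []*v1.ContainerPort) bool {
    for _, cp := range wantPorts {
        if cp.HostPort <= 0 {
            // No host port requested for this container port, nothing to conflict with.
            continue
        }
        key := portKey{protocol: string(cp.Protocol), hostIP: cp.HostIP, hostPort: cp.HostPort}
        if _, used := existing[key]; used {
            return true
        }
    }
    return false
}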

3.2 PodFitsResources

Checks whether the node has enough resources left for the pod.

func PodFitsResources(pod *v1.Pod, meta algorithm.PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
    node := nodeInfo.Node()
    if node == nil {
        return false, nil, fmt.Errorf("node not found")
    }

    var predicateFails []algorithm.PredicateFailureReason
    allowedPodNumber := nodeInfo.AllowedPodNumber()
    // Check whether adding this pod would exceed the node's allowed pod count.
    if len(nodeInfo.Pods())+1 > allowedPodNumber {
        predicateFails = append(predicateFails, NewInsufficientResourceError(v1.ResourcePods, 1, int64(len(nodeInfo.Pods())), int64(allowedPodNumber)))
    }

    // No extended resources should be ignored by default.
    ignoredExtendedResources := sets.NewString()

    var podRequest *schedulercache.Resource
    if predicateMeta, ok := meta.(*predicateMetadata); ok {
        podRequest = predicateMeta.podRequest
        if predicateMeta.ignoredExtendedResources != nil {
            ignoredExtendedResources = predicateMeta.ignoredExtendedResources
        }
    } else {
        // We couldn't parse metadata - fallback to computing it.
        podRequest = GetResourceRequest(pod)
    }
    if podRequest.MilliCPU == 0 &&
        podRequest.Memory == 0 &&
        podRequest.EphemeralStorage == 0 &&
        len(podRequest.ScalarResources) == 0 {
        return len(predicateFails) == 0, predicateFails, nil
    }

    allocatable := nodeInfo.AllocatableResource()
    // Check CPU: all calculations are based on requests, not limits or actual usage.
    if allocatable.MilliCPU < podRequest.MilliCPU+nodeInfo.RequestedResource().MilliCPU {
        predicateFails = append(predicateFails, NewInsufficientResourceError(v1.ResourceCPU, podRequest.MilliCPU, nodeInfo.RequestedResource().MilliCPU, allocatable.MilliCPU))
    }
    // Check memory: again based on requests.
    if allocatable.Memory < podRequest.Memory+nodeInfo.RequestedResource().Memory {
        predicateFails = append(predicateFails, NewInsufficientResourceError(v1.ResourceMemory, podRequest.Memory, nodeInfo.RequestedResource().Memory, allocatable.Memory))
    }
    if allocatable.EphemeralStorage < podRequest.EphemeralStorage+nodeInfo.RequestedResource().EphemeralStorage {
        predicateFails = append(predicateFails, NewInsufficientResourceError(v1.ResourceEphemeralStorage, podRequest.EphemeralStorage, nodeInfo.RequestedResource().EphemeralStorage, allocatable.EphemeralStorage))
    }

    // Check extended resources, e.g. resources registered via device plugins.
    for rName, rQuant := range podRequest.ScalarResources {
        if v1helper.IsExtendedResourceName(rName) {
            // If this resource is one of the extended resources that should be
            // ignored, we will skip checking it.
            if ignoredExtendedResources.Has(string(rName)) {
                continue
            }
        }
        if allocatable.ScalarResources[rName] < rQuant+nodeInfo.RequestedResource().ScalarResources[rName] {
            predicateFails = append(predicateFails, NewInsufficientResourceError(rName, podRequest.ScalarResources[rName], nodeInfo.RequestedResource().ScalarResources[rName], allocatable.ScalarResources[rName]))
        }
    }

    if klog.V(10) {
        if len(predicateFails) == 0 {
            // We explicitly don't do klog.V(10).Infof() to avoid computing all the parameters if this is
            // not logged. There is visible performance gain from it.
            klog.Infof("Schedule Pod %+v on Node %+v is allowed, Node is running only %v out of %v Pods.",
                podName(pod), node.Name, len(nodeInfo.Pods()), allowedPodNumber)
        }
    }
    return len(predicateFails) == 0, predicateFails, nil
}
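
The heart of the check is the same arithmetic for every resource: a node fails when allocatable < already-requested + pod-request, all computed from requests rather than limits or actual usage. A tiny self-contained sketch of that rule for CPU, with made-up numbers:

package main

import "fmt"

// fitsCPU applies the rule PodFitsResources applies to each resource:
// allocatable must cover what is already requested plus the new pod's request.
func fitsCPU(allocatableMilliCPU, requestedMilliCPU, podRequestMilliCPU int64) bool {
    return allocatableMilliCPU >= requestedMilliCPU+podRequestMilliCPU
}

func main() {
    // Hypothetical node: 4000m CPU allocatable, 3500m already requested by running pods.
    fmt.Println(fitsCPU(4000, 3500, 400)) // true:  3500m + 400m = 3900m <= 4000m
    fmt.Println(fitsCPU(4000, 3500, 600)) // false: 3500m + 600m = 4100m > 4000m -> Insufficient cpu
}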

3.3 PodFitsHost (HostNamePred = "HostName")

Checks whether pod.Spec.NodeName matches the current node's name.
If pod.Spec.NodeName is empty or equals the current node's name, it returns true.
Otherwise it returns false.

func PodFitsHost(pod *v1.Pod, meta algorithm.PredicateMetadata, nodeInfo *schedulercache.NodeInfo) (bool, []algorithm.PredicateFailureReason, error) {
    if len(pod.Spec.NodeName) == 0 {
        return true, nil, nil
    }
    node := nodeInfo.Node()
    if node == nil {
        return false, nil, fmt.Errorf("node not found")
    }
    if pod.Spec.NodeName == node.Name {
        return true, nil, nil
    }
    return false, []algorithm.PredicateFailureReason{ErrPodNotMatchHostName}, nil
}
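
A quick, hypothetical illustration (the node names and nodeInfo variables below are made up): a pod whose spec.nodeName is already set passes this predicate only on that exact node.

// Hypothetical pod pinned to node "node-1" via spec.nodeName.
pod := &v1.Pod{
    ObjectMeta: metav1.ObjectMeta{Name: "pinned-pod"},
    Spec:       v1.PodSpec{NodeName: "node-1"},
}
// PodFitsHost(pod, meta, nodeInfoOfNode1) -> (true, nil, nil)
// PodFitsHost(pod, meta, nodeInfoOfNode2) -> (false, [ErrPodNotMatchHostName], nil)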

4. Summary

This article briefly introduced a few common predicates: PodFitsHostPorts, PodFitsResources and PodFitsHost (HostName). The main goal is to understand what kind of work a predicate performs.
