How to do master-slave failover in k8s

Background

For a service deployed on k8s, how do we make it highly available without relying on an external etcd? In other words, how do we deploy multiple master/standby instances and fail over between them?

The pilot implementation

Based on istio-1.6.12.

Pilot's leader election code lives in istio/pilot/pkg/leaderelection/leaderelection.go. It mainly uses the leaderelection library from k8s.io/client-go, and the amount of code is tiny, which shows how well k8s has packaged the election flow: users barely need any extra work. The pilot-side code is only the tip of the iceberg, though; to see how leader election actually happens on k8s we have to dig into the k8s codebase.

type LeaderElection struct {
    namespace string
    name      string
    runFns    []func(stop <-chan struct{}) // functions run once we become leader; pilot's main workload
    client    kubernetes.Interface
    ttl       time.Duration // lease duration (how long before the lease expires)

    // Records which "cycle" the election is on. This is incremented each time an election is won and then lost
    // This is mostly just for testing
    cycle      *atomic.Int32
    electionID string
}

LeaderElection is the election object defined by pilot. It exposes two methods: AddRunFunction, which appends to runFns the work to run once this instance becomes leader, and Run, which starts the election flow.
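A minimal usage sketch (hedged: the NewLeaderElection constructor signature and the startControllers helper below are assumptions for illustration, not quoted from pilot):

le := leaderelection.NewLeaderElection("istio-system", podName, electionID, kubeClient) // assumed constructor
le.AddRunFunction(func(stop <-chan struct{}) {
    // leader-only work; stop is closed when leadership is lost
    startControllers(stop) // hypothetical helper
})
go le.Run(stopCh) // Run blocks, so start it in its own goroutine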

Run

Run is the core of pilot's leader election:

// Run will start leader election, calling all runFns when we become the leader.
func (l *LeaderElection) Run(stop <-chan struct{}) {
    for {
        le, err := l.create()
        if err != nil {
            // This should never happen; errors are only from invalid input and the input is not user modifiable
            panic("leaderelection creation failed: " + err.Error())
        }
        l.cycle.Inc()
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            <-stop
            cancel()
        }()
        le.Run(ctx)
        select {
        case <-stop:
            // We were told to stop explicitly. Exit now
            return
        default:
            // Otherwise, we may have lost our lock. In practice, this is extremely rare; we need to have the lock, then lose it
            // Typically this means something went wrong, such as API server downtime, etc
            // If this does happen, we will start the cycle over again
            log.Errorf("Leader election cycle %v lost. Trying again", l.cycle.Load())
        }
    }
}

First, look at the LeaderElection.create function:

func (l *LeaderElection) create() (*leaderelection.LeaderElector, error) {
    callbacks := leaderelection.LeaderCallbacks{
        OnStartedLeading: func(ctx context.Context) {
            for _, f := range l.runFns {
                go f(ctx.Done())
            }
        },
        OnStoppedLeading: func() {
            log.Infof("leader election lock lost")
        },
    }
    lock := resourcelock.ConfigMapLock{
        ConfigMapMeta: metaV1.ObjectMeta{Namespace: l.namespace, Name: l.electionID},
        Client:        l.client.CoreV1(),
        LockConfig: resourcelock.ResourceLockConfig{
            Identity: l.name,
        },
    }
    return leaderelection.NewLeaderElector(leaderelection.LeaderElectionConfig{
        Lock:          &lock,
        LeaseDuration: l.ttl,
        RenewDeadline: l.ttl / 2,
        RetryPeriod:   l.ttl / 4,
        Callbacks:     callbacks,
        ReleaseOnCancel: false,
    })
}

leaderelection ("k8s.io/client-go/tools/leaderelection")

The create logic is straightforward: it registers two callback functions, invoked respectively when this instance becomes leader and when it falls back to follower. The callbacks use k8s's own wrapper types, because we ultimately call leaderelection.NewLeaderElector to build the LeaderElector provided by k8s, passing in the callbacks along with a lock; the role of the lock is analyzed below.
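Note that pilot locks on a ConfigMap here. Newer client-go versions deprecate ConfigMap- and Endpoints-based locks in favor of the dedicated Lease resource; a sketch of the equivalent lock (same LeaderElectionConfig, only the lock type changes) could look like:

import (
    metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// a sketch assuming a client-go version that ships resourcelock.LeaseLock
func newLeaseLock(client kubernetes.Interface, namespace, electionID, identity string) resourcelock.Interface {
    return &resourcelock.LeaseLock{
        LeaseMeta: metaV1.ObjectMeta{Namespace: namespace, Name: electionID},
        Client:    client.CoordinationV1(), // Leases live in the coordination.k8s.io API group
        LockConfig: resourcelock.ResourceLockConfig{
            Identity: identity,
        },
    }
}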

Next, let's look at how leaderElector.Run actually executes:

// Run starts the leader election loop
func (le *LeaderElector) Run(ctx context.Context) {
    defer func() {
        runtime.HandleCrash()
        le.config.Callbacks.OnStoppedLeading()
    }()
    if !le.acquire(ctx) {
        return // ctx signalled done
    }
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()
    go le.config.Callbacks.OnStartedLeading(ctx)
    le.renew(ctx)
}

Run blocks for the lifetime of the election: acquire blocks until the leader lease is obtained (or the context is cancelled); once the lease is held, OnStartedLeading fires in a goroutine and renew blocks while it keeps refreshing the lease. When renew returns, whether because the lease was lost or the context was cancelled, the deferred Callbacks.OnStoppedLeading runs and the instance drops back to the follower role.
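As a standalone reference, client-go also provides leaderelection.RunOrDie, which bundles NewLeaderElector and Run into one call; a minimal sketch (the ttl split and callbacks are illustrative, mirroring pilot's create above):

import (
    "context"
    "time"

    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
    "k8s.io/klog"
)

// a sketch built on client-go's RunOrDie helper
func runElection(ctx context.Context, lock resourcelock.Interface, ttl time.Duration, onLeader func(context.Context)) {
    leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: ttl,     // how long the lease lasts without renewal
        RenewDeadline: ttl / 2, // the leader steps down if renewal takes longer than this
        RetryPeriod:   ttl / 4, // how often acquisition/renewal is attempted
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: onLeader,                                // we became leader
            OnStoppedLeading: func() { klog.Info("lost leadership") }, // we lost it
        },
    })
}

Back to the source: acquire loops until it either obtains the leader lease or the context signals done: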

// acquire loops calling tryAcquireOrRenew and returns true immediately when tryAcquireOrRenew succeeds.
// Returns false if ctx signals done.
func (le *LeaderElector) acquire(ctx context.Context) bool {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel()
    succeeded := false
    desc := le.config.Lock.Describe()
    klog.Infof("attempting to acquire leader lease  %v...", desc)
    wait.JitterUntil(func() {
        succeeded = le.tryAcquireOrRenew(ctx)
        le.maybeReportTransition()
        if !succeeded {
            klog.V(4).Infof("failed to acquire lease %v", desc)
            return
        }
        le.config.Lock.RecordEvent("became leader")
        le.metrics.leaderOn(le.config.Name)
        klog.Infof("successfully acquired lease %v", desc)
        cancel()
    }, le.config.RetryPeriod, JitterFactor, true, ctx.Done())
    return succeeded
}
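Before moving on, a quick note on wait.JitterUntil (from k8s.io/apimachinery/pkg/util/wait): it invokes a function every period, stretched by up to jitterFactor*period of random delay so that competing candidates don't retry in lockstep. An illustrative, runnable sketch:

import (
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func main() {
    stop := make(chan struct{})
    go func() { time.Sleep(5 * time.Second); close(stop) }()
    // Run f roughly once per second, stretched by up to 100% random jitter each round;
    // sliding=true means the next period is measured from when f returns.
    wait.JitterUntil(func() {
        fmt.Println("tick", time.Now())
    }, time.Second, 1.0, true, stop)
}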

wait.JitterUntil repeatedly calls tryAcquireOrRenew to try to obtain the leader lease; on success, cancel() breaks the loop:

// tryAcquireOrRenew tries to acquire a leader lease if it is not already acquired,
// else it tries to renew the lease if it has already been acquired. Returns true
// on success else returns false.
func (le *LeaderElector) tryAcquireOrRenew(ctx context.Context) bool {
    now := metav1.Now()
    leaderElectionRecord := rl.LeaderElectionRecord{
        HolderIdentity:       le.config.Lock.Identity(),
        LeaseDurationSeconds: int(le.config.LeaseDuration / time.Second),
        RenewTime:            now,
        AcquireTime:          now,
    }

    // 1. obtain or create the ElectionRecord
    oldLeaderElectionRecord, oldLeaderElectionRawRecord, err := le.config.Lock.Get(ctx)
    if err != nil {
        if !errors.IsNotFound(err) {
            klog.Errorf("error retrieving resource lock %v: %v", le.config.Lock.Describe(), err)
            return false
        }
        if err = le.config.Lock.Create(ctx, leaderElectionRecord); err != nil {
            klog.Errorf("error initially creating leader election record: %v", err)
            return false
        }
        le.observedRecord = leaderElectionRecord
        le.observedTime = le.clock.Now()
        return true
    }

    // 2. Record obtained, check the Identity & Time
    if !bytes.Equal(le.observedRawRecord, oldLeaderElectionRawRecord) {
        le.observedRecord = *oldLeaderElectionRecord
        le.observedRawRecord = oldLeaderElectionRawRecord
        le.observedTime = le.clock.Now()
    }
    if len(oldLeaderElectionRecord.HolderIdentity) > 0 &&
        le.observedTime.Add(le.config.LeaseDuration).After(now.Time) &&
        !le.IsLeader() {
        klog.V(4).Infof("lock is held by %v and has not yet expired", oldLeaderElectionRecord.HolderIdentity)
        return false
    }

    // 3. We're going to try to update. The leaderElectionRecord is set to it's default
    // here. Let's correct it before updating.
    if le.IsLeader() {
        leaderElectionRecord.AcquireTime = oldLeaderElectionRecord.AcquireTime
        leaderElectionRecord.LeaderTransitions = oldLeaderElectionRecord.LeaderTransitions
    } else {
        leaderElectionRecord.LeaderTransitions = oldLeaderElectionRecord.LeaderTransitions + 1
    }

    // update the lock itself
    if err = le.config.Lock.Update(ctx, leaderElectionRecord); err != nil {
        klog.Errorf("Failed to update lock: %v", err)
        return false
    }

    le.observedRecord = leaderElectionRecord
    le.observedTime = le.clock.Now()
    return true
}

tryAcquireOrRenew's job is to acquire the leader lease, or refresh it if we already hold it. Recall the lock passed to leaderelection.NewLeaderElector earlier:

    lock := resourcelock.ConfigMapLock{
        ConfigMapMeta: metaV1.ObjectMeta{Namespace: l.namespace, Name: l.electionID},
        Client:        l.client.CoreV1(),
        LockConfig: resourcelock.ResourceLockConfig{
            Identity: l.name,
        },
    }

It is a ConfigMapLock; here is its definition:

type ConfigMapLock struct {
    // ConfigMapMeta should contain a Name and a Namespace of a
    // ConfigMapMeta object that the LeaderElector will attempt to lead.
    ConfigMapMeta metav1.ObjectMeta
    Client        corev1client.ConfigMapsGetter
    LockConfig    ResourceLockConfig
    cm            *v1.ConfigMap
}

ConfigMapLock wraps a ConfigMap. The flow of tryAcquireOrRenew can be summarized as follows (a sketch of the underlying lock write follows the list):

  • Fetch the configmap in k8s that backs the leader lease
  • If it does not exist, create it; the creator immediately becomes leader
  • Save the fetched record locally and check whether it has expired
  • If the record has expired (or we already hold it), try to update it; whoever updates successfully becomes (or stays) leader
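The "record" is a JSON-serialized LeaderElectionRecord that ConfigMapLock keeps in the configmap's control-plane.alpha.kubernetes.io/leader annotation. A simplified sketch of what the lock's Update boils down to (error handling trimmed; the real code lives in client-go's resourcelock package):

import (
    "context"
    "encoding/json"

    v1 "k8s.io/api/core/v1"
    metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
    rl "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// simplified: serialize the record and write it back as an annotation on the configmap
func updateLock(ctx context.Context, c corev1client.ConfigMapsGetter, cm *v1.ConfigMap, rec rl.LeaderElectionRecord) error {
    data, err := json.Marshal(rec)
    if err != nil {
        return err
    }
    if cm.Annotations == nil {
        cm.Annotations = map[string]string{}
    }
    // rl.LeaderElectionRecordAnnotationKey == "control-plane.alpha.kubernetes.io/leader"
    cm.Annotations[rl.LeaderElectionRecordAnnotationKey] = string(data)
    _, err = c.ConfigMaps(cm.Namespace).Update(ctx, cm, metaV1.UpdateOptions{})
    return err
}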

This reveals how leader election works on k8s:
multiple instances race to update the same configmap. Each update carries the resourceVersion of the copy that was just read, so the API server's optimistic concurrency control lets at most one racing write succeed; the winner becomes leader and periodically refreshes the record's RenewTime. As long as the record stays fresh, followers back off and the leader keeps its role; if the leader stops renewing (crash, partition, API server outage), the lease expires after LeaseDuration and another instance takes over.
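To make the failover window concrete, a hedged worked example (the 30s ttl is an illustrative value, not a number taken from pilot):

// illustrative numbers only; pilot's actual default ttl may differ
ttl := 30 * time.Second
leaseDuration := ttl     // 30s: a follower may take the lock once the record is this stale
renewDeadline := ttl / 2 // 15s: the leader gives up leadership if it cannot renew in this window
retryPeriod := ttl / 4   // 7.5s: the jittered cadence of acquire/renew attempts
// Worst case the leader dies right after renewing: a follower only takes over after
// roughly leaseDuration plus one jittered retryPeriod, i.e. a bit over 30s here.
_, _, _ = leaseDuration, renewDeadline, retryPeriod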
