Analysis of the CRD controller code generated by kubebuilder

Open questions:

| # | Question | Resolution |
| --- | --- | --- |
| 0 | kubebuilder's generated code wraps the CRD machinery in quite a few layers; for instance, where exactly does the Reconcile code in the controller package get called from? | Through the Reconcile method implemented on the controller. |
| 1 | When do the createStructuredListWatch function and the informersByGVK map actually get invoked and populated? | |

Main text:

For how to use kubebuilder itself, refer to the official documentation:
https://book.kubebuilder.io/quick-start.html

This post does not cover how to use kubebuilder; it analyzes the code kubebuilder generates. That generated code wraps the CRD machinery in quite a few layers, so a good grasp of the internals makes CRD development considerably easier.
Without further ado, let's look at the code:

### main.go
var (
    scheme   = runtime.NewScheme()    // create an empty Scheme struct
    setupLog = ctrl.Log.WithName("setup")
)

func init() {
    fmt.Printf("get scheme0: %v", scheme)  // &{map[] map[] map[] map[] map[] map[] 0xc0000b2660 map[] [] pkg/runtime/scheme.go:101}
    _ = clientgoscheme.AddToScheme(scheme) // register the predefined client-go types into the scheme
    fmt.Printf("get scheme1: %v", scheme)
    _ = appsv1alpha1.AddToScheme(scheme) // register this CRD's scheme info, e.g. {apps.devops.xxx.com v1alpha1 WeApp}
    // note: this only records the GVK-to-Go-type mapping in the scheme; nothing is created on the cluster at this point
    fmt.Printf("get scheme2: %v", scheme)
    // +kubebuilder:scaffold:scheme
}
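For reference, appsv1alpha1.AddToScheme lives in the generated api/v1alpha1/groupversion_info.go. A sketch of what that file typically looks like (the group name follows the example GVK above and is purely illustrative):

package v1alpha1

import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
    // GroupVersion is the group/version used to register these objects.
    GroupVersion = schema.GroupVersion{Group: "apps.devops.xxx.com", Version: "v1alpha1"}

    // SchemeBuilder is used to add Go types to the GroupVersionKind scheme.
    SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

    // AddToScheme adds the types in this group-version to the given scheme.
    AddToScheme = SchemeBuilder.AddToScheme
)

The concrete types are then registered in the generated *_types.go via SchemeBuilder.Register(&WeApp{}, &WeAppList{}) (names per the example above); this is also what later allows ip.Scheme.New(listGVK) in createStructuredListWatch to construct the List object for a GVK.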

Once the CRD's GVK has been registered into the scheme, execution moves on to the main function.

### main.go
func main() {

    var metricsAddr string        // address the metrics endpoint listens on
    var enableLeaderElection bool // used for HA: several replicas are deployed and leader election ensures only one of them is actively running. Election works by acquiring a distributed lock in etcd (via the API server) to become leader; the other replicas keep contending for the lock and take over if the leader dies.
  ...

//// 1. NewManager returns a new Manager for creating Controllers.
// This creates a generic manager; the custom controller logic has not been wired in yet.
// ctrl.GetConfigOrDie() loads the local kubeconfig (or in-cluster config) and exits if none can be found.
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ 
        Scheme:             scheme,
        MetricsBindAddress: metricsAddr,
        Port:               9443,
        LeaderElection:     enableLeaderElection,
        LeaderElectionID:   "8bf23ea1.xxx.xxx.com",  // presumably the name/ID of the distributed lock resource
    })

//// 2. SetupWithManager registers the controller logic with the manager (struct controllerManager), placing it into either the leaderElectionRunnables or the nonLeaderElectionRunnables field, so that Start can later run them as []Runnable.
    if err = (&controllers.WeDoctorAppReconciler{
        Client: mgr.GetClient(),
        Log:    ctrl.Log.WithName("controllers").WithName("WeDoctorApp"),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr); err != nil {
        setupLog.Error(err, "unable to create controller", "controller", "WeDoctorApp")
        os.Exit(1)
    }
    // +kubebuilder:scaffold:builder

//// 3. mgr.Start runs the []Runnable held by the manager, ultimately calling each controller's Start method.
// SetupSignalHandler installs handlers for termination signals and returns the corresponding stop channel.
    setupLog.Info("starting manager")
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        setupLog.Error(err, "problem running manager")
        os.Exit(1)
    }
}

Let's pull out the key steps and analyze them one by one:

1. Create a manager used to register and start controllers
ctrl.NewManager()

2. Register the controller with the manager
(&controllers.WeDoctorAppReconciler{
        Client: mgr.GetClient(),
        Log:    ctrl.Log.WithName("controllers").WithName("WeDoctorApp"),
        Scheme: mgr.GetScheme(),
    }).SetupWithManager(mgr);

3. Start the controllers
mgr.Start(ctrl.SetupSignalHandler())

1. ctrl.NewManager()

// New returns a new Manager for creating Controllers.
func New(config *rest.Config, options Options) (Manager, error) {
    // Initialize a rest.config if none was specified
    if config == nil {
        return nil, fmt.Errorf("must specify Config")
    }
    // Set default values for options fields
    options = setOptionsDefaults(options)  // fill in default values for any unset options
    // Create the mapper provider
    mapper, err := options.MapperProvider(config)  // RESTMapper for converting between GVKs, GVRs, etc.
    // Create the cache for the cached read client and registering informers
    cache, err := options.NewCache(config, cache.Options{Scheme: options.Scheme, Mapper: mapper, Resync: options.SyncPeriod, Namespace: options.Namespace})
// The cache mainly exists to build the informer map: an informer is created per GVK, the informersByGVK map provides the GVK-to-informer lookup, and each informer Lists and Watches its GVK through a ListWatch function.

    apiReader, err := client.New(config, client.Options{Scheme: options.Scheme, Mapper: mapper})
// read client (goes straight to the API server)
    writeObj, err := options.NewClient(cache, config, client.Options{Scheme: options.Scheme, Mapper: mapper})
// write client (reads are served from the cache)

...

    return &controllerManager{
        config:                config,
        scheme:                options.Scheme,
        cache:                 cache,
        fieldIndexes:          cache,
        client:                writeObj,
        apiReader:             apiReader,
        recorderProvider:      recorderProvider,
        resourceLock:          resourceLock,
        mapper:                mapper,
        metricsListener:       metricsListener,
        internalStop:          stop,
        internalStopper:       stop,
        port:                  options.Port,
        host:                  options.Host,
        certDir:               options.CertDir,
        leaseDuration:         *options.LeaseDuration,
        renewDeadline:         *options.RenewDeadline,
        retryPeriod:           *options.RetryPeriod,
        healthProbeListener:   healthProbeListener,
        readinessEndpointName: options.ReadinessEndpointName,
        livenessEndpointName:  options.LivenessEndpointName,
    }, nil
}
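An aside on these two clients: apiReader is exposed by the manager as GetAPIReader() and always reads straight from the API server, while the delegating client built by options.NewClient is what GetClient() returns, serving reads from the cache and sending writes to the API server. A minimal sketch of the difference, as it might look in code that has access to the manager (for example a Runnable registered via mgr.Add) once the caches are running; the namespace/name are made up, and imports of context and k8s.io/apimachinery/pkg/types are assumed:

key := types.NamespacedName{Namespace: "default", Name: "demo"} // hypothetical object
var app appsv1alpha1.WeApp

// Cache-backed read: served from the informer cache once it has synced.
if err := mgr.GetClient().Get(context.TODO(), key, &app); err != nil {
    setupLog.Error(err, "cached read failed")
}

// Direct read: bypasses the cache and hits the API server every time.
if err := mgr.GetAPIReader().Get(context.TODO(), key, &app); err != nil {
    setupLog.Error(err, "direct read failed")
}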

The NewCache used here is not supplied by our main.go; it is filled in by setOptionsDefaults.
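Concretely, the defaulting amounts to something like this (a paraphrased sketch of the idea, not a verbatim quote of manager.go):

// inside setOptionsDefaults: NewCache can be overridden (e.g. in tests);
// if the caller left it nil, it falls back to the cache package's New constructor.
if options.NewCache == nil {
    options.NewCache = cache.New
}

cache.New is what actually builds the informer map: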

// New initializes and returns a new Cache.
func New(config *rest.Config, opts Options) (Cache, error) {
    opts, err := defaultOpts(config, opts)
    if err != nil {
        return nil, err
    }
    im := internal.NewInformersMap(config, opts.Scheme, opts.Mapper, *opts.Resync, opts.Namespace)
    return &informerCache{InformersMap: im}, nil
}

// NewInformersMap creates a new InformersMap that can create informers for
// both structured and unstructured objects.
func NewInformersMap(config *rest.Config,
    scheme *runtime.Scheme,
    mapper meta.RESTMapper,
    resync time.Duration,
    namespace string) *InformersMap {

    return &InformersMap{
        structured:   newStructuredInformersMap(config, scheme, mapper, resync, namespace),
        unstructured: newUnstructuredInformersMap(config, scheme, mapper, resync, namespace),

        Scheme: scheme,
    }
}

// newStructuredInformersMap creates a new InformersMap for structured objects.
func newStructuredInformersMap(config *rest.Config, scheme *runtime.Scheme, mapper meta.RESTMapper, resync time.Duration, namespace string) *specificInformersMap {
    return newSpecificInformersMap(config, scheme, mapper, resync, namespace, createStructuredListWatch)
}

// newSpecificInformersMap returns a new specificInformersMap (like
// the generical InformersMap, except that it doesn't implement WaitForCacheSync).
func newSpecificInformersMap(config *rest.Config,
    scheme *runtime.Scheme,
    mapper meta.RESTMapper,
    resync time.Duration,
    namespace string,
    createListWatcher createListWatcherFunc) *specificInformersMap {
    ip := &specificInformersMap{
        config:            config,
        Scheme:            scheme,
        mapper:            mapper,
        informersByGVK:    make(map[schema.GroupVersionKind]*MapEntry),
        codecs:            serializer.NewCodecFactory(scheme),
        paramCodec:        runtime.NewParameterCodec(scheme),
        resync:            resync,
        startWait:         make(chan struct{}),
        createListWatcher: createListWatcher,
        namespace:         namespace,
    }
    return ip
}

// newListWatch returns a new ListWatch object that can be used to create a SharedIndexInformer.
func createStructuredListWatch(gvk schema.GroupVersionKind, ip *specificInformersMap) (*cache.ListWatch, error) {
    // Kubernetes APIs work against Resources, not GroupVersionKinds.  Map the
    // groupVersionKind to the Resource API we will use.
    mapping, err := ip.mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
    if err != nil {
        return nil, err
    }

    client, err := apiutil.RESTClientForGVK(gvk, ip.config, ip.codecs)
    if err != nil {
        return nil, err
    }
    listGVK := gvk.GroupVersion().WithKind(gvk.Kind + "List")
    listObj, err := ip.Scheme.New(listGVK)
    if err != nil {
        return nil, err
    }

    // Create a new ListWatch for the obj
    return &cache.ListWatch{
        ListFunc: func(opts metav1.ListOptions) (runtime.Object, error) {
            res := listObj.DeepCopyObject()
            isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
            err := client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Do().Into(res)
            return res, err
        },
        // Setup the watch function
        WatchFunc: func(opts metav1.ListOptions) (watch.Interface, error) {
            // Watch needs to be set to true separately
            opts.Watch = true
            isNamespaceScoped := ip.namespace != "" && mapping.Scope.Name() != meta.RESTScopeNameRoot
            return client.Get().NamespaceIfScoped(ip.namespace, isNamespaceScoped).Resource(mapping.Resource.Resource).VersionedParams(&opts, ip.paramCodec).Watch()
        },
    }, nil
}

So NewCache exists to establish the GVK-to-informer mapping.
Question: when do the createStructuredListWatch function and the informersByGVK map actually get invoked and populated?
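As far as I can tell from this controller-runtime version, the answer is that informersByGVK is populated lazily: nothing is created when NewCache runs. The first time some caller asks the cache for a given GVK's informer, specificInformersMap misses the lookup, invokes its createListWatcher (createStructuredListWatch here) for that GVK, wraps the resulting ListWatch in a SharedIndexInformer and stores it in informersByGVK. In practice that first request happens when the controller's watch sources start (see doWatch and Controller.Start below) or when the cache-backed client first reads that type. A small sketch that forces it by hand, e.g. dropped into main.go after ctrl.NewManager (assuming the WeApp type and the old API where GetInformer takes no context argument):

// Explicitly ask the manager's cache for the WeApp informer. On the first call
// the cache has no entry for this GVK, so it builds a ListWatch via
// createStructuredListWatch and records the informer in informersByGVK.
informer, err := mgr.GetCache().GetInformer(&appsv1alpha1.WeApp{})
if err != nil {
    setupLog.Error(err, "unable to get informer")
    os.Exit(1)
}
_ = informer // a client-go SharedIndexInformer that will List/Watch WeApp objects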

2. SetupWithManager

SetupWithManager registers the controller with the manager.

func (r *WeDoctorAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&appsv1alpha1.WeDoctorApp{}).
        Complete(r)
}

This is the builder pattern: NewControllerManagedBy creates the builder struct, For passes in the primary resource to reconcile, and besides For there are other options, e.g. Owns (declaring which secondary resources the primary resource owns); finally Complete performs the actual build.
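For example, if the WeDoctorApp controller also created Deployments, a hypothetical variant of SetupWithManager could add Owns so that changes to an owned Deployment requeue its WeDoctorApp owner (this is a sketch, not part of the generated code, and assumes the controller file also imports appsv1 "k8s.io/api/apps/v1"):

func (r *WeDoctorAppReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&appsv1alpha1.WeDoctorApp{}). // primary resource: its events enqueue its own NamespacedName
        Owns(&appsv1.Deployment{}).       // owned resource: its events enqueue the owning WeDoctorApp
        Complete(r)
}

In doWatch below, resources registered through Owns show up as blder.managedObjects and are watched with handler.EnqueueRequestForOwner.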

// Builder builds a Controller.
type Builder struct {
    apiType        runtime.Object
    mgr            manager.Manager
    predicates     []predicate.Predicate
    managedObjects []runtime.Object
    watchRequest   []watchRequest
    config         *rest.Config
    ctrl           controller.Controller
    ctrlOptions    controller.Options
    name           string
}

// Build builds the Application ControllerManagedBy and returns the Controller it created.
func (blder *Builder) Build(r reconcile.Reconciler) (controller.Controller, error) {
    // Set the ControllerManagedBy
    if err := blder.doController(r); err != nil {
        return nil, err
    }
    // Set the Watch
    if err := blder.doWatch(); err != nil {
        return nil, err
    }
    return blder.ctrl, nil
}

Two calls do the main work: blder.doController(r) and blder.doWatch().
(1) blder.doController(r)
This builds the controller and adds it to the manager's []Runnable via mgr.Add(c).

// New returns a new Controller registered with the Manager.  The Manager will ensure that shared Caches have
// been synced before the Controller is Started.
func New(name string, mgr manager.Manager, options Options) (Controller, error) {
  ...
    // Create controller with dependencies set
    c := &controller.Controller{
        Do:       options.Reconciler,    // our Reconciler is injected here as Do
        Cache:    mgr.GetCache(),
        Config:   mgr.GetConfig(),
        Scheme:   mgr.GetScheme(),
        Client:   mgr.GetClient(),
        Recorder: mgr.GetEventRecorderFor(name),
        MakeQueue: func() workqueue.RateLimitingInterface {
            return workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), name)
        },
        MaxConcurrentReconciles: options.MaxConcurrentReconciles,
        Name:                    name,
    }
    // Add the controller as a Manager components
    return c, mgr.Add(c)   
}

// Start implements controller.Controller
func (c *Controller) Start(stop <-chan struct{}) error {
        // NB(directxman12): launch the sources *before* trying to wait for the
        // caches to sync so that they have a chance to register their intendeded
        // caches.
        for _, watch := range c.watches {
            log.Info("Starting EventSource", "controller", c.Name, "source", watch.src)
            if err := watch.src.Start(watch.handler, c.Queue, watch.predicates...); err != nil {
                return err
            }
        }
     ...
        // Launch workers to process resources
        log.Info("Starting workers", "controller", c.Name, "worker count", c.MaxConcurrentReconciles)
        for i := 0; i < c.MaxConcurrentReconciles; i++ {
            // Process work items
            go wait.Until(c.worker, c.JitterPeriod, stop)
        }
// mgr.Start(ctrl.SetupSignalHandler()) eventually calls each controller's Start method; the workers then call c.Do.Reconcile(req), i.e. the Reconcile method of the options.Reconciler injected above, which is our own WeDoctorAppReconciler.
}

(2) blder.doWatch()
doWatch simply appends the watches to c.watches; it is the (c *Controller) Start method that later calls each source's Start to actually begin watching.

func (blder *Builder) doWatch() error {
    // Reconcile type
    src := &source.Kind{Type: blder.apiType}
    hdler := &handler.EnqueueRequestForObject{}

// The handler registered here: the handler kubebuilder wires up for us just enqueues the NamespacedName of the object that changed; if the Reconcile logic needs to tell create/update/delete apart, we have to add that logic ourselves.

    err := blder.ctrl.Watch(src, hdler, blder.predicates...)
    if err != nil {
        return err
    }
    // Watches the managed types
    for _, obj := range blder.managedObjects {
        src := &source.Kind{Type: obj}
        hdler := &handler.EnqueueRequestForOwner{
            OwnerType:    blder.apiType,
            IsController: true,
        }
        if err := blder.ctrl.Watch(src, hdler, blder.predicates...); err != nil {
            return err
        }
    }
    // Do the watch requests
    for _, w := range blder.watchRequest {
        if err := blder.ctrl.Watch(w.src, w.eventhandler, blder.predicates...); err != nil {
            return err
        }
    }
    return nil
}

// Watch implements controller.Controller
func (c *Controller) Watch(src source.Source, evthdler handler.EventHandler, prct ...predicate.Predicate) error {
    c.mu.Lock()
    defer c.mu.Unlock()
...
    c.watches = append(c.watches, watchDescription{src: src, handler: evthdler, predicates: prct})
    if c.Started {        // the controller is not Started yet at this point, so the watch is not actually started here
        log.Info("Starting EventSource", "controller", c.Name, "source", src)
        return src.Start(evthdler, c.Queue, prct...)
    }
    return nil
}

So the overall picture is: we implement the Reconcile method on our Reconciler; internally the controller's Watch method receives events and pushes them onto the workqueue; the controller's Start method then drives reconcileHandler, which in turn calls the externally injected Reconcile method to do the actual processing.
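To close the loop on open question 0, the Reconcile that c.Do.Reconcile(req) ends up in is the one we write on WeDoctorAppReconciler. A minimal sketch (using the old signature without a context parameter, matching the excerpts above, and assuming the scaffolded reconciler embeds client.Client as kubebuilder generates it; the body is illustrative):

func (r *WeDoctorAppReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    ctx := context.Background()
    log := r.Log.WithValues("wedoctorapp", req.NamespacedName)

    // The request only carries the NamespacedName that the handler enqueued;
    // the object itself is fetched through the cache-backed client.
    var app appsv1alpha1.WeDoctorApp
    if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
        // NotFound usually means the object was deleted after the event was queued.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    log.Info("reconciling", "generation", app.Generation)
    // ... compare desired state (app.Spec) with actual state and
    // create/update/delete owned objects here ...

    return ctrl.Result{}, nil
}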

3. mgr.Start(ctrl.SetupSignalHandler())

With the analysis above in place, the remaining logic of mgr.Start(ctrl.SetupSignalHandler()) is fairly simple:

func (cm *controllerManager) Start(stop <-chan struct{}) error {
    // join the passed-in stop channel as an upstream feeding into cm.internalStopper
    defer close(cm.internalStopper)

    // initialize this here so that we reset the signal channel state on every start
    cm.errSignal = &errSignaler{errSignal: make(chan struct{})}

    // Metrics should be served whether the controller is leader or not.
    // (If we don't serve metrics for non-leaders, prometheus will still scrape
    // the pod but will get a connection refused)
    if cm.metricsListener != nil {
        go cm.serveMetrics(cm.internalStop)
    }

    // Serve health probes
    if cm.healthProbeListener != nil {
        go cm.serveHealthProbes(cm.internalStop)
    }

    go cm.startNonLeaderElectionRunnables()

    if cm.resourceLock != nil {
        err := cm.startLeaderElection()
        if err != nil {
            return err
        }
    } else {
        go cm.startLeaderElectionRunnables()
    }

    select {
    case <-stop:
        // We are done
        return nil
    case <-cm.errSignal.GotError():
        // Error starting a controller
        return cm.errSignal.Error()
    }
}

In short, it runs the runnables held inside the controllerManager.
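Note that controllers are not the only runnables: anything implementing the Runnable interface can be handed to mgr.Add and will be started by mgr.Start alongside the controllers. A hypothetical fragment for main.go, using the stop-channel signature that matches the excerpts above (requires importing "sigs.k8s.io/controller-runtime/pkg/manager"):

if err := mgr.Add(manager.RunnableFunc(func(stop <-chan struct{}) error {
    setupLog.Info("extra runnable started")
    <-stop // block until the manager is asked to shut down
    return nil
})); err != nil {
    setupLog.Error(err, "unable to add runnable")
    os.Exit(1)
}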

