Cloud Computing & Virtualization: k8s Advanced - CRD Development Basics

1. Introduction

Taking advantage of some holiday downtime, I am cleaning up my old study notes and consolidating what I have learned. This series will have three parts: one on the basics and two on custom development & deployment.

1.1 What is a CRD?
CRD stands for Custom Resource Definition: it adds a new API resource type to Kubernetes, alongside built-in types such as Pod and Service, and is typically used to deploy and orchestrate several built-in resources (Pods, Services, and so on) as a single unit. Mastering CRDs is an essential skill for any advanced Kubernetes user.

1.2 Why do we need CRDs?
Helm can also deploy and orchestrate a Deployment, Service, and Ingress as one unit, but it does not watch those resources over their full lifecycle. A CRD registers a new resource type with the API server (persisted in etcd); once registered, you can create objects of that type, watch their status, and react to changes. For example, you could define a MySQL CRD whose controller creates and monitors every Pod and Service of a MySQL cluster project.
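To make that concrete, here is a minimal, purely hypothetical sketch of the Go types that could back such a MySQL CRD (the MysqlCluster kind, its fields, and the controller behaviour described in the comments are illustrative assumptions, not something built later in this series):

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MysqlClusterSpec is the desired state a user declares in YAML.
type MysqlClusterSpec struct {
	// Replicas is how many MySQL pods the controller should keep running.
	Replicas int32 `json:"replicas"`
	// Version is the MySQL image version to deploy.
	Version string `json:"version"`
}

// MysqlClusterStatus is what the controller actually observed in the cluster.
type MysqlClusterStatus struct {
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}

// MysqlCluster is the custom resource itself; a controller watches objects of
// this kind and creates/repairs the underlying Pods and Services so that the
// observed status converges to the declared spec.
type MysqlCluster struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MysqlClusterSpec   `json:"spec,omitempty"`
	Status MysqlClusterStatus `json:"status,omitempty"`
}

A user would then create a MysqlCluster object just like a Pod or Service, and the controller's reconcile loop would take care of the rest.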

1.3 How to implement a CRD
The main scaffolding tools for CRD development are kubebuilder and the Operator SDK; the latter is gradually converging on the former, so kubebuilder is the recommended choice.

1.4 Useful online resources
A very good theory write-up: https://www.cnblogs.com/alisystemsoftware/p/11580202.html
Theory plus a worked example (a practical, if somewhat complex, scenario ^_^): https://blog.upweto.top/gitbooks/kubebuilder/%E8%AE%A4%E8%AF%86Kubebuilder.html
Official documentation: https://book.kubebuilder.io/
https://cloudnative.to/kubebuilder/ (Chinese translation)
https://book-v1.book.kubebuilder.io/ (previous version)

There are very few beginner-friendly examples online; as the official book itself admits:

…too many tutorials start out with an overly complicated setup, or a toy application that covers the basics and then stalls out on the more complicated material…

I hope this series avoids that trap and stays simple and practical.

2. Prerequisites

CRD development touches quite a few areas, so ideally you already have:
2.1 Go fundamentals plus some of the more advanced material, and familiarity with common packages and concepts such as cobra, flag, log, channel, context, error handling, interface, reflect…;
2.2 Docker basics;
2.3 Kubernetes basics;

3. Pre-development Setup

3.1 Set up a k8s cluster
See my earlier post https://blog.csdn.net/dustzhu/article/details/108527676 or any other online guide.
3.2 Set up the development environment
3.2.1 Install gcc

yum -y install gcc

3.2.2 Install the Go toolchain
Official downloads: https://golang.google.cn/dl/

tar -C /usr/local -xzf go1.xx.xx.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
export GO111MODULE=on 

3.2.3 Install kubebuilder

os=$(go env GOOS) 
arch=$(go env GOARCH)
curl -L https://go.kubebuilder.io/dl/2.3.1/${os}/${arch} | tar -xz -C /tmp/
mv /tmp/kubebuilder_2.3.1_${os}_${arch} /usr/local/kubebuilder
export PATH=$PATH:/usr/local/kubebuilder/bin

3.2.4 Configure a domestic Go module proxy

export GOPROXY=https://goproxy.cn/

3.2.5 Install kustomize

curl -s "https://raw.githubusercontent.com/\
k
kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"  | bash  #或者把shell下载下来手动执行

四. CRD开发&部署步骤

4.1 Initialize the project
[root@k8s01 home]# mkdir mydemo
[root@k8s01 home]# cd mydemo
[root@k8s01 mydemo]# go mod init mydemo
go: creating new go.mod: module mydemo
[root@k8s01 mydemo]# kubebuilder init --domain ips.com.cn
Writing scaffold for you to edit…
Get controller runtime:
$ go get sigs.k8s.io/controller-runtime@v0.5.0
Update go.mod:
$ go mod tidy
Running make:
$ make
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
Next: define a resource with:
$ kubebuilder create api
4.2 Create an API
[root@k8s01 mydemo]# kubebuilder create api --group mygroup --version v1 --kind Mykind
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing scaffold for you to edit…
api/v1/mykind_types.go
controllers/mykind_controller.go
Running make:
$ make
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
[root@k8s01 mydemo]#
[root@k8s01 mydemo]# vi main.go   # optional: change the metrics port 8080 to 8081; skip this if there is no port conflict!
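For orientation, the port lives in the scaffolded main.go. A minimal sketch of the relevant line, based on the kubebuilder 2.3.x scaffold (exact flag name and default may differ slightly in other versions), with the default ":8080" changed to ":8081":

// Excerpt from the generated main.go: the manager exposes metrics on this address.
// The scaffold defaults to ":8080"; bumping it to ":8081" avoids the port conflict on this host.
flag.StringVar(&metricsAddr, "metrics-addr", ":8081", "The address the metric endpoint binds to.")

The value is later passed to ctrl.NewManager as the MetricsBindAddress option, so changing the flag default (or simply passing --metrics-addr at run time) is enough.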
4.3 Modify the generated API code
This is the heart of CRD development: editing the generated Go code for the resource type (api/v1/mykind_types.go) and the controller (controllers/mykind_controller.go). Since this article only covers the basic workflow, no code is modified here; a sketch of the generated types follows below for orientation.
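This is a trimmed sketch of what kubebuilder scaffolds into api/v1/mykind_types.go (the MykindList type, the scheme registration, and most kubebuilder markers are omitted, and details can vary slightly between kubebuilder versions). The placeholder field Foo is what you would replace with your own spec fields, and it is also what shows up in the CRD schema installed in the next step:

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MykindSpec defines the desired state of Mykind
type MykindSpec struct {
	// Foo is an example field of Mykind. Edit Mykind_types.go to remove/update
	Foo string `json:"foo,omitempty"`
}

// MykindStatus defines the observed state of Mykind
type MykindStatus struct {
}

// +kubebuilder:object:root=true

// Mykind is the Schema for the mykinds API
type Mykind struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   MykindSpec   `json:"spec,omitempty"`
	Status MykindStatus `json:"status,omitempty"`
}

The other half of the work happens in the Reconcile method of controllers/mykind_controller.go, which is where a real controller would create and repair the resources owned by each Mykind object; the follow-up articles in this series fill that in.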
4.4 Install the CRD
This step registers the CRD with the API server (so it is persisted in etcd). Under the hood, controller-gen generates the CRD manifest under config/crd/bases from the MykindSpec structure in api/v1/mykind_types.go, and kustomize/kubectl then apply it:
[root@k8s01 mydemo]# make install
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/mykinds.mygroup.ips.com.cn created

[root@k8s01 mydemo]# make run   # optional: runs the controller locally against the cluster for testing; skip this if you are going to deploy it into the cluster!

Check the installed CRD:
[root@k8s01 mydemo]# kubectl get crd
NAME                         CREATED AT
mykinds.mygroup.ips.com.cn   2020-11-09T05:43:15Z
[root@k8s01 mydemo]# kubectl describe crd

Name:         mykinds.mygroup.ips.com.cn
Namespace:    
Labels:       <none>
Annotations:  controller-gen.kubebuilder.io/version: v0.2.5
API Version:  apiextensions.k8s.io/v1
Kind:         CustomResourceDefinition
Metadata:
  ......(omitted)
Spec:
  Conversion:
    Strategy:  None
  Group:       mygroup.ips.com.cn
  Names:
    Kind:                   Mykind
    List Kind:              MykindList
    Plural:                 mykinds
    Singular:               mykind
  Preserve Unknown Fields:  true
  Scope:                    Namespaced
  Versions:
    Name:  v1
    Schema:
      openAPIV3Schema:
        Description:  Mykind is the Schema for the mykinds API
        Properties:
          API Version:
            Description:  APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
            Type:         string
          Kind:
            Description:  Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
            Type:         string
          Metadata:
            Type:  object
          Spec:
            Description:  MykindSpec defines the desired state of Mykind
            Properties:
              Foo:
                Description:  Foo is an example field of Mykind. Edit Mykind_types.go to remove/update
                Type:         string
            Type:             object
          Status:
            Description:  MykindStatus defines the observed state of Mykind
            Type:         object
        Type:             object
    Served:               true
    Storage:              true
Status:
  Accepted Names:
    Kind:       Mykind
    List Kind:  MykindList
    Plural:     mykinds
    Singular:   mykind
  Conditions:
    ......(omitted)
  Stored Versions:
    v1
Events:  <none>

4.5 Create an instance of the CRD
[root@k8s01 mydemo]# kubectl apply -f config/samples/
The sample manifest under config/samples/ sets the example field foo to bar. Check the created instance:
[root@k8s01 mydemo]# kubectl get mykind -A
NAMESPACE   NAME            AGE
default     mykind-sample   10s
[root@k8s01 mydemo]# kubectl describe mykind

Name:         mykind-sample
Namespace:    default
Labels:       <none>
Annotations:  ......(omitted)
API Version:  mygroup.ips.com.cn/v1
Kind:         Mykind
Metadata:
  ......(omitted)
Spec:
  Foo:   bar
Events:  <none>

4.6 Deploy the CRD and its controller
4.6.1 Adjust the configuration files
The generated Dockerfile and YAML have a few problems in this environment (mainly that the default gcr.io images and Go module proxy are unreachable from here), so they need to be adapted:
[root@k8s01 mydemo]# vi Dockerfile
# Build the manager binary
FROM golang:1.13 as builder

WORKDIR /workspace
# Added: use a domestic Go module proxy so dependencies can be downloaded inside the build container
ENV GOPROXY https://goproxy.cn/
ENV GO111MODULE on

# ... (the rest of the generated builder stage is unchanged) ...

# Replaced: the default gcr.io distroless base image is not reachable here, so use a mirror
#FROM gcr.io/distroless/static:nonroot
FROM registry.cn-hangzhou.aliyuncs.com/byteforce/distroless:nonroot

[root@k8s01 mydemo]# vi config/default/manager_auth_proxy_patch.yaml

# Replaced: the kube-rbac-proxy image on gcr.io is not reachable here, so use a mirror
#image: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
image: registry.cn-hangzhou.aliyuncs.com/hsc/kube-rbac-proxy

4.6.2 Deploy
[root@k8s01 mydemo]# make docker-build docker-push IMG=registry.ips.com.cn/demo/mycontroller:v0.1.0
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/root/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go test ./... -coverprofile cover.out
? mydemo [no test files]
? mydemo/api/v1 [no test files]
ok mydemo/controllers 4.649s coverage: 0.0% of statements
docker build . -t registry.ips.com.cn/demo/mycontroller:v0.1.0
Sending build context to Docker daemon 40.34MB
Step 1/16 : FROM golang:1.13 as builder

Successfully built f691c20c5f70
Successfully tagged registry.ips.com.cn/demo/mycontroller:v0.1.0
docker push registry.ips.com.cn/demo/mycontroller:v0.1.0
The push refers to repository [registry.ips.com.cn/demo/mycontroller]
Get https://registry.ips.com.cn/v2/: dial tcp: lookup registry.ips.com.cn on 114.114.114.114:53: no such host
make: *** [docker-push] Error 1

Workaround: if the image registry cannot be reached, export the image and load it manually on the relevant nodes:
[root@k8s01 mydemo]# cd ..
[root@k8s01 home]# docker images
REPOSITORY                              TAG      IMAGE ID       CREATED          SIZE
registry.ips.com.cn/demo/mycontroller   v0.1.0   f691c20c5f70   41 seconds ago   43.9MB

[root@k8s01 home]# docker save -o mycon.tar registry.ips.com.cn/demo/mycontroller:v0.1.0
[root@k8s01 home]# scp mycon.tar k8s02:/home/
[root@k8s01 home]# scp mycon.tar k8s03:/home/
[root@k8s02 home]# docker load -i mycon.tar
[root@k8s03 home]# docker load -i mycon.tar
[root@k8s01 mydemo]# make deploy IMG=registry.ips.com.cn/demo/mycontroller:v0.1.0
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && kustomize edit set image controller=registry.ips.com.cn/demo/mycontroller:v0.1.0
kustomize build config/default | kubectl apply -f -
namespace/mydemo-system created
customresourcedefinition.apiextensions.k8s.io/mykinds.mygroup.ips.com.cn configured
role.rbac.authorization.k8s.io/mydemo-leader-election-role created
clusterrole.rbac.authorization.k8s.io/mydemo-manager-role created
clusterrole.rbac.authorization.k8s.io/mydemo-proxy-role created
clusterrole.rbac.authorization.k8s.io/mydemo-metrics-reader created
rolebinding.rbac.authorization.k8s.io/mydemo-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/mydemo-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/mydemo-proxy-rolebinding created
service/mydemo-controller-manager-metrics-service created
deployment.apps/mydemo-controller-manager created
Check the deployed objects:
[root@k8s01 mydemo]# kubectl get deploy -A
NAMESPACE       NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
mydemo-system   mydemo-controller-manager   1/1     1            1           31s
[root@k8s01 mydemo]# kubectl get pods -A
NAMESPACE       NAME                                         READY   STATUS    RESTARTS   AGE
kube-system     kube-flannel-ds-rp4t6                        1/1     Running   0          20d
kube-system     kube-flannel-ds-vqmqs                        1/1     Running   0          20d
mydemo-system   mydemo-controller-manager-59cf997ccd-rtpkg   2/2     Running   1          40s
[root@k8s01 mydemo]# kubectl get svc -A
NAMESPACE       NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
default         kubernetes                                  ClusterIP   196.169.0.1     <none>        443/TCP    20d
mydemo-system   mydemo-controller-manager-metrics-service   ClusterIP   196.169.36.71   <none>        8443/TCP   48s

5. Summary

This article walked through the full workflow for the default CRD scaffolded by kubebuilder, to give beginners an overall picture before the deeper customization in the next parts of this series. The figure below summarizes the CRD development and deployment flow:
[Figure: CRD development and deployment workflow]
