Cloud Computing & Virtualization: Advanced k8s - A CRD Project Example

1. Introduction

The previous post walked through the overall workflow of developing and deploying a CRD. Building on that, this post digs deeper via a simple project example.

2. Without a CRD

Without a CRD, deploying a simple single-node MySQL instance on Kubernetes only takes two YAML files and a kubectl apply:
deploy_mysql.yaml

apiVersion: apps/v1                             # API version of the apiserver
kind: Deployment                                # Deployment controller; manages Pods and ReplicaSets
metadata:                                       #
  name: mysql                                   # name of the Deployment; globally unique
spec:                                           #
  replicas: 1                                   # desired number of Pod replicas
  selector:                                     #
    matchLabels:                                # label selector for the ReplicaSet
      app: mysql                                # matching Pods carry this label
  template:                                     # template used to create the Pod replicas (instances)
    metadata:
      labels:
        app: mysql                              # label on the Pod replicas; must match the selector above
    spec:
      containers:                               # <[]Object> container definitions for the Pod
        - name: mysql                           # container name
          image: mysql:5.7                      # Docker image for the container
          ports:                                # <[]Object>
            - containerPort: 3306               # port exposed by the container
          env:                                  # environment variables injected into the container
            - name: MYSQL_ROOT_PASSWORD         # variable holding the MySQL root password
              value: "123456"

service_mysql.yaml

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      nodePort: 30306
  selector:
    app: mysql

[root@k8s01 home]# kubectl apply -f deploy_mysql.yaml
[root@k8s01 home]# kubectl apply -f service_mysql.yaml

Check the deployed resources:
=============================
[root@k8s01 home]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mysql-gz6xp   1/1     Running   0          3m49s
[root@k8s01 home]# kubectl get svc
NAME         TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   192.160.0.1       <none>        443/TCP          4d18h
mysql        NodePort    196.169.147.255   <none>        3306:30306/TCP   3s

Access MySQL from outside the cluster:
======================================
[root@k8s03 home]# mysql -hk8s01 -P30306 -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 30
Server version: 8.0.21 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

3. With a CRD

With a CRD, we can consolidate the key elements of all those YAML files into a single file, then apply that one file to deploy the whole project in one step. Sticking with MySQL, merging the Deployment and Service elements gives:
mycrd_mysql.yaml

apiVersion: mygroup.ips.com.cn/v1  # the API group/version we defined in the previous post
kind: Mykind                       # the CRD kind we defined in the previous post
metadata:
  name: mykind-sample              # name of this Mykind instance; globally unique
spec:                              # the Spec carries the key fields merged from the Deployment and Service
  replicas: 1                      # desired number of Pod replicas
  image: mysql:5.7                 # image version
  port: 3306                       # MySQL port
  nodeport: 30306                  # MySQL port exposed outside the cluster
  envs:                            # container environment variables; this shape reuses an existing API type, but it could also be defined separately
    - name: MYSQL_ROOT_PASSWORD    # variable holding the MySQL root password
      value: "123456"              # the password

Then a single kubectl apply -f mycrd_mysql.yaml is enough: the CRD controller we deploy creates the Pod and Service through its reconcile loop.
The crux of a CRD project is building a controller that meets our needs and deploying it into the Kubernetes cluster. The previous post scaffolded the project with kubebuilder; this post fills in a controller that can deploy MySQL in one step. Starting from that scaffold, the files we need to change are listed below.
Source code: https://github.com/zhuxianglei/K8S-CRD-Demo
3.1 vi api/v1/mykind_types.go
(screenshot of the modified api/v1/mykind_types.go in the original post)
3.2 vi controllers/mykind_controller.go

/*


Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
	mygroupv1 "K8S-CRD-Demo/api/v1"
	"context"
	"fmt"
	"reflect"

	"github.com/go-logr/logr"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// create a new deploy object
func NewDeploy(owner *mygroupv1.Mykind, logger logr.Logger, scheme *runtime.Scheme) *appsv1.Deployment {
	labels := map[string]string{"app": owner.Name}
	selector := &metav1.LabelSelector{MatchLabels: labels}
	deploy := &appsv1.Deployment{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "apps/v1",
			Kind:       "Deployment",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      owner.Name,
			Namespace: owner.Namespace,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: owner.Spec.Replicas,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: labels,
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{
							Name:            owner.Name,
							Image:           owner.Spec.Image,
							Ports:           []corev1.ContainerPort{{ContainerPort: owner.Spec.Port}},
							ImagePullPolicy: corev1.PullIfNotPresent,
							Env:             owner.Spec.Envs,
						},
					},
				},
			},
			Selector: selector,
		},
	}
	// add ControllerReference for deployment
	if err := controllerutil.SetControllerReference(owner, deploy, scheme); err != nil {
		msg := fmt.Sprintf("***SetControllerReference for Deployment %s/%s failed!***", owner.Namespace, owner.Name)
		logger.Error(err, msg)
	}
	return deploy
}

// create a new service object
func NewService(owner *mygroupv1.Mykind, logger logr.Logger, scheme *runtime.Scheme) *corev1.Service {
	srv := &corev1.Service{
		TypeMeta: metav1.TypeMeta{
			Kind:       "Service",
			APIVersion: "v1",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      owner.Name,
			Namespace: owner.Namespace,
		},
		Spec: corev1.ServiceSpec{
			Ports: []corev1.ServicePort{{Port: owner.Spec.Port, NodePort: owner.Spec.Nodeport}},
			Selector: map[string]string{
				"app": owner.Name,
			},
			Type: corev1.ServiceTypeNodePort,
		},
	}
	// add ControllerReference for service
	if err := controllerutil.SetControllerReference(owner, srv, scheme); err != nil {
		msg := fmt.Sprintf("***setcontrollerReference for Service %s/%s failed!***", owner.Namespace, owner.Name)
		logger.Error(err, msg)
	}
	return srv
}

// MykindReconciler reconciles a Mykind object
type MykindReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

// +kubebuilder:rbac:groups=mygroup.ips.com.cn,resources=mykinds,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=mygroup.ips.com.cn,resources=mykinds/status,verbs=get;update;patch
func (r *MykindReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	fmt.Println("---start Reconcile---")
	ctx := context.Background()
	lgr := r.Log.WithValues("mykind", req.NamespacedName)

	// your logic here
	/*1. create/update deploy
	  ========================*/
	mycrd_instance := &mygroupv1.Mykind{}
	if err := r.Get(ctx, req.NamespacedName, mycrd_instance); err != nil {
		lgr.Error(err, "***Get crd instance failed (it may have been deleted)! Please check!***")
		return reconcile.Result{}, err
	}
	/*if mycrd_instance.DeletionTimestamp != nil {
		lgr.Info("---Deleting crd instance,cleanup subresources---")
		return reconcile.Result{}, nil
	}*/
	oldDeploy := &appsv1.Deployment{}
	newDeploy := NewDeploy(mycrd_instance, lgr, r.Scheme)
	if err := r.Get(ctx, req.NamespacedName, oldDeploy); err != nil {
		// a non-NotFound error must not be treated as "deploy already exists"
		if !errors.IsNotFound(err) {
			lgr.Error(err, "***Get deploy failed!***")
			return reconcile.Result{}, err
		}
		lgr.Info("---Creating deploy---")
		// 1. create Deploy
		if err := r.Create(ctx, newDeploy); err != nil {
			lgr.Error(err, "***Create deploy failed!***")
			return reconcile.Result{}, err
		}
		lgr.Info("---Create deploy done---")
	} else if !reflect.DeepEqual(oldDeploy.Spec, newDeploy.Spec) {
		lgr.Info("---Updating deploy---")
		oldDeploy.Spec = newDeploy.Spec
		if err := r.Update(ctx, oldDeploy); err != nil {
			lgr.Error(err, "***Update old deploy failed!***")
			return reconcile.Result{}, err
		}
		lgr.Info("---Update deploy done---")
	}
	/*2. create/update Service
	  =========================*/
	oldService := &corev1.Service{}
	newService := NewService(mycrd_instance, lgr, r.Scheme)
	if err := r.Get(ctx, req.NamespacedName, oldService); err != nil {
		// a non-NotFound error must not be treated as "service already exists"
		if !errors.IsNotFound(err) {
			lgr.Error(err, "***Get service failed!***")
			return reconcile.Result{}, err
		}
		lgr.Info("---Creating service---")
		if err := r.Create(ctx, newService); err != nil {
			lgr.Error(err, "***Create service failed!***")
			return reconcile.Result{}, err
		}
		lgr.Info("---Create service done---")
	} else if !reflect.DeepEqual(oldService.Spec, newService.Spec) {
		lgr.Info("---Updating service---")
		clusterIP := oldService.Spec.ClusterIP // ClusterIP is immutable; keep the old value across the update
		oldService.Spec = newService.Spec
		oldService.Spec.ClusterIP = clusterIP
		if err := r.Update(ctx, oldService); err != nil {
			lgr.Error(err, "***Update service failed!***")
			return reconcile.Result{}, err
		}
		lgr.Info("---Update service done---")
	}
	//end your logic
	return ctrl.Result{}, nil
}

func (r *MykindReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&mygroupv1.Mykind{}).
		Complete(r)
}

3.3 make install
[root@k8s01 K8S-CRD-Demo]# make install
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/mykinds.mygroup.ips.com.cn created
3.4 make run
[root@k8s01 K8S-CRD-Demo]# make run # keep this window open; later steps are observed and verified here
which: no controller-gen in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin:/usr/local/kubebuilder/bin)
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.2.5
/root/go/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
/root/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
go run ./main.go
2020-11-23T13:29:25.769+0800 INFO controller-runtime.metrics metrics server is starting to listen {"addr": ":8081"}
2020-11-23T13:29:25.769+0800 INFO setup starting manager
2020-11-23T13:29:25.770+0800 INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
2020-11-23T13:29:25.770+0800 INFO controller-runtime.controller Starting EventSource {"controller": "mykind", "source": "kind source: /, Kind="}
2020-11-23T13:29:25.870+0800 INFO controller-runtime.controller Starting Controller {"controller": "mykind"}
2020-11-23T13:29:25.870+0800 INFO controller-runtime.controller Starting workers {"controller": "mykind", "worker count": 1}

3.5 Open a second terminal and run the following steps
3.5.1 Create an instance of the CRD
[root@k8s01 samples]# kubectl apply -f mycrd_mysql.yaml
mykind.mygroup.ips.com.cn/mykind-sample created
[root@k8s01 samples]# date
Wed Nov 23 13:31:46 CST 2020

Output in the make run window:
==============================
---start Reconcile---
2020-11-23T13:31:40.906+0800 INFO controllers.Mykind ---Creating deploy--- {"mykind": "default/mykind-sample"}
2020-11-23T13:31:40.919+0800 INFO controllers.Mykind ---Create deploy done--- {"mykind": "default/mykind-sample"}
2020-11-23T13:31:41.020+0800 INFO controllers.Mykind ---Creating service--- {"mykind": "default/mykind-sample"}
2020-11-23T13:31:41.048+0800 INFO controllers.Mykind ---Create service done--- {"mykind": "default/mykind-sample"}
2020-11-23T13:31:41.048+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "mykind", "request": "default/mykind-sample"}

CRD resources at this point:
============================
[root@k8s01 samples]# kubectl get mykind
NAME            AGE
mykind-sample   26s
[root@k8s01 samples]# kubectl describe mykind

Name:         mykind-sample
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration: ...
API Version:  mygroup.ips.com.cn/v1
Kind:         Mykind
Metadata:
  Creation Timestamp:  2020-11-23T05:31:40Z
  Generation:          1
  Managed Fields:
    API Version:  mygroup.ips.com.cn/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:envs:
        f:image:
        f:nodeport:
        f:port:
        f:replicas:
    Manager:         kubectl
    Operation:       Update
    Time:            2020-11-23T05:31:40Z
  Resource Version:  2366908
  Self Link:         /apis/mygroup.ips.com.cn/v1/namespaces/default/mykinds/mykind-sample
  UID:               a0681f08-0a84-4e98-82ea-142026343143
Spec:
  Envs:
    Name:    MYSQL_ROOT_PASSWORD
    Value:   123456
  Image:     mysql:5.7
  Nodeport:  30306
  Port:      3306
  Replicas:  1
Events:      <none>

[root@k8s01 samples]# kubectl get deploy
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
mykind-sample   1/1     1            1           48s
[root@k8s01 samples]# kubectl describe deploy

Name:                   mykind-sample
Namespace:              default
CreationTimestamp:      Wed, 23 Nov 2020 13:31:40 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=mykind-sample
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=mykind-sample
  Containers:
   mykind-sample:
    Image:      mysql:5.7
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      MYSQL_ROOT_PASSWORD:  123456
    Mounts:                 <none>
  Volumes:                  <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   mykind-sample-7584496b56 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  56s   deployment-controller  Scaled up replica set mykind-sample-7584496b56 to 1

[root@k8s01 samples]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
mykind-sample-7584496b56-mljld   1/1     Running   0          79s
[root@k8s01 samples]# kubectl describe pod mykind-sample-7584496b56-mljld

Name:         mykind-sample-7584496b56-mljld
Namespace:    default
Priority:     0
Node:         k8s03/192.168.100.103
Start Time:   Wed, 23 Nov 2020 13:31:41 +0800
Labels:       app=mykind-sample
              pod-template-hash=7584496b56
Annotations:  <none>
Status:       Running
IP:           196.159.1.25
IPs:
  IP:           196.159.1.25
Controlled By:  ReplicaSet/mykind-sample-7584496b56
Containers:
  mykind-sample:
    Container ID:   docker://89ed62183ae7ec4c4077e289054ab721981a513011367ba504a3fb045204d8e9
    Image:          mysql:5.7
    Image ID:       docker-pullable://mysql@sha256:d4ca82cee68dce98aa72a1c48b5ef5ce9f1538265831132187871b78e768aed1
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 23 Nov 2020 13:31:42 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      MYSQL_ROOT_PASSWORD:  123456
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tchmp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-tchmp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tchmp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/mykind-sample-7584496b56-mljld to k8s03
  Normal  Pulled     91s        kubelet, k8s03     Container image "mysql:5.7" already present on machine
  Normal  Created    90s        kubelet, k8s03     Created container mykind-sample
  Normal  Started    90s        kubelet, k8s03     Started container mykind-sample

[root@k8s01 samples]# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   196.169.0.1      <none>        443/TCP          11d
mykind-sample   NodePort    196.169.72.151   <none>        3306:30306/TCP   103s
[root@k8s01 samples]# kubectl describe svc mykind-sample

Name:                     mykind-sample
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=mykind-sample
Type:                     NodePort
IP:                       196.169.72.151
Port:                     <unset>  3306/TCP
TargetPort:               3306/TCP
NodePort:                 <unset>  30306/TCP
Endpoints:                196.159.1.25:3306
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

3.5.2 Change the CRD instance's replicas
[root@k8s01 samples]# date
Wed Nov 23 13:34:12 CST 2020
[root@k8s01 samples]# vi mycrd_mysql.yaml # change replicas: 1 to 2

apiVersion: mygroup.ips.com.cn/v1
kind: Mykind
metadata:
  name: mykind-sample
spec:
  # Add fields here
  replicas: 2                    #  desired number of Pod replicas
  image: mysql:5.7               #  image version
  port: 3306                     #  MySQL port
  nodeport: 30306                #  MySQL port exposed outside the cluster
  envs:                          #  environment variables injected into the container
    - name: MYSQL_ROOT_PASSWORD  #  variable holding the MySQL root password
      value: "123456"            #  the password

[root@k8s01 samples]# kubectl apply -f mycrd_mysql.yaml
mykind.mygroup.ips.com.cn/mykind-sample configured
[root@k8s01 samples]# date
Wed Nov 23 13:35:00 CST 2020

Output in the make run window:
==============================
---start Reconcile---
2020-11-23T13:34:58.901+0800 INFO controllers.Mykind ---Updating deploy--- {"mykind": "default/mykind-sample"}
2020-11-23T13:34:58.909+0800 INFO controllers.Mykind ---Update deploy done--- {"mykind": "default/mykind-sample"}
2020-11-23T13:34:58.909+0800 INFO controllers.Mykind ---Updating service--- {"mykind": "default/mykind-sample"}
2020-11-23T13:34:58.928+0800 INFO controllers.Mykind ---Update service done--- {"mykind": "default/mykind-sample"}
2020-11-23T13:34:58.928+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "mykind", "request": "default/mykind-sample"}

CRD resources at this point:
============================
[root@k8s01 samples]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
mykind-sample-7584496b56-mljld   1/1     Running   0          3m41s
mykind-sample-7584496b56-qplw2   1/1     Running   0          23s
[root@k8s01 samples]# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   196.169.0.1      <none>        443/TCP          11d
mykind-sample   NodePort    196.169.72.151   <none>        3306:30306/TCP   3m48s

3.5.3 Change the CRD instance's nodeport
[root@k8s01 samples]# vi mycrd_mysql.yaml # change nodeport: 30306 to 30307

apiVersion: mygroup.ips.com.cn/v1
kind: Mykind
metadata:
  name: mykind-sample
spec:
  # Add fields here
  replicas: 2                    #  desired number of Pod replicas
  image: mysql:5.7               #  image version
  port: 3306                     #  MySQL port
  nodeport: 30307                #  MySQL port exposed outside the cluster
  envs:                          #  environment variables injected into the container
    - name: MYSQL_ROOT_PASSWORD  #  variable holding the MySQL root password
      value: "123456"            #  the password

[root@k8s01 samples]# kubectl apply -f mycrd_mysql.yaml
mykind.mygroup.ips.com.cn/mykind-sample configured
[root@k8s01 samples]# date
Wed Nov 23 13:36:05 CST 2020

Output in the make run window:
==============================
---start Reconcile---
2020-11-23T13:36:03.903+0800 INFO controllers.Mykind ---Updating deploy--- {"mykind": "default/mykind-sample"}
2020-11-23T13:36:03.945+0800 INFO controllers.Mykind ---Update deploy done--- {"mykind": "default/mykind-sample"}
2020-11-23T13:36:03.945+0800 INFO controllers.Mykind ---Updating service--- {"mykind": "default/mykind-sample"}
2020-11-23T13:36:03.968+0800 INFO controllers.Mykind ---Update service done--- {"mykind": "default/mykind-sample"}
2020-11-23T13:36:03.968+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "mykind", "request": "default/mykind-sample"}

CRD resources at this point:
============================
[root@k8s01 samples]# kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
mykind-sample-7584496b56-mljld   1/1     Running   0          4m34s
mykind-sample-7584496b56-qplw2   1/1     Running   0          76s
[root@k8s01 samples]# kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   196.169.0.1      <none>        443/TCP          11d
mykind-sample   NodePort    196.169.72.151   <none>        3306:30307/TCP   4m39s
