Service Load Balancing and Service Types in k8s

Contents

I. Introduction to Service

II. Service parameters in detail

III. Two ways to define a Service

1. Command line: kubectl expose

2. YAML file

IV. Service load-balancing configuration

1. kube-proxy proxy modes

(1) Enabling ipvs

(2) Load-balancing scheduling policies

2. Session affinity

3. Demo

V. The four Service types

1. ClusterIP

2. NodePort

3. LoadBalancer

(1) Enable kube-proxy's strictARP on the cluster master so that the nodes stop answering ARP requests on behalf of other interfaces and OpenELB answers them instead

(2) Apply the downloaded openelb.yaml (the image addresses need to be changed)

(3) Write a YAML file to add an Eip address pool

(4) Create a Service to verify

4. ExternalName


I. Introduction to Service

In the earlier posts on creating pods, we saw workloads that need to be reached from inside the cluster or from outside. For these cases a Service provides a single, stable entry address for the application: it exposes the workload on the network and distributes requests across the backing containers according to a load-balancing algorithm. Pod IP addresses change as pods are recreated, so a pod that is meant to provide a stable service obviously cannot be accessed by its IP address directly.
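A quick way to see this stable-entry behaviour is the Endpoints object that Kubernetes maintains for every Service: it lists the pod IPs currently behind the Service. A minimal sketch, using the my-nginx Service and the myns namespace created later in this article (the output is illustrative; the pod IPs will differ):

[root@k8s-master service]# kubectl get endpoints my-nginx -n myns
NAME       ENDPOINTS                                              AGE
my-nginx   10.244.36.75:80,10.244.169.147:80,10.244.169.148:80   1m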

II. Service parameters in detail

apiVersion: v1  #required
kind: Service   #required
metadata:    #required
  annotations:   #custom annotations
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-nginx","namespace":"myns"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"name":"my-nginx"}}}
  creationTimestamp: "2023-12-02T01:18:01Z"
  name: my-nginx   #required
  namespace: myns   #required; should normally match the namespace of the pods and pod controllers it serves
  resourceVersion: "1537"
  uid: ab1bb8ce-be87-48d4-8396-5e802dfbca8c
spec:   #required
  clusterIP: 10.109.39.11  #virtual IP, used when type=ClusterIP; you can set it yourself or omit it and let the system assign one
  clusterIPs:
  - 10.109.39.11
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:   #exposed ports
  - port: 80  #port the Service listens on
    protocol: TCP  #port protocol, TCP by default; TCP and UDP are supported
    targetPort: 80  #port on the backend pods that traffic is forwarded to
    nodePort: number   #set when type=NodePort: the port mapped on the host; specify it yourself or omit it and let the system assign one. With type=NodePort, other nodes and external clients can reach the Service via "node address + this port"
  selector:   #selector, required
    name: my-nginx  #must match the pod labels
  sessionAffinity: None  #session affinity, None by default; can also be ClientIP, which routes requests from the same client IP to the same pod
  type: ClusterIP  #type: ClusterIP, NodePort or LoadBalancer, covered in detail below
status:   #set when type=LoadBalancer: the address of the external load balancer (public cloud environments); shown in the demos later
  loadBalancer: {}

III. Two ways to define a Service

1. Command line: kubectl expose

Here we create a 3-replica Deployment running nginx and create a Service for it with kubectl expose, where --port=80 sets the port the Service listens on, --type=ClusterIP sets the type, and --target-port=80 sets the backend pod port that traffic is forwarded to. We then use the exposed IP to verify that access works.

[root@k8s-master service]# cat service1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx
  template:
    metadata:
      labels:
        name: my-nginx
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
[root@k8s-master service]# kubectl get pods -n myns
NAME                       READY   STATUS    RESTARTS   AGE
my-nginx-7c787d8bb-g6fb5   1/1     Running   0          9s
my-nginx-7c787d8bb-t5jdh   1/1     Running   0          9s
my-nginx-7c787d8bb-znd22   1/1     Running   0          9s
[root@k8s-master service]# kubectl get deploy -n myns
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   3/3     3            3           15s
​
[root@k8s-master service]# kubectl expose deployment my-nginx -n myns --port=80 --type=ClusterIP --target-port=80 
service/my-nginx exposed
[root@k8s-master service]# kubectl get service -n myns  #access using the IP below
NAME       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
my-nginx   ClusterIP   10.111.4.81   <none>        80/TCP    6s
​
[root@k8s-master service]# curl 10.111.4.81
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

[root@k8s-node1 ~]# curl 10.111.4.81   #access from node1 to verify that in-cluster ClusterIP access works
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
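In-cluster clients can also reach the Service by its DNS name, <service>.<namespace>.svc.cluster.local, instead of the ClusterIP. A sketch run from a throwaway pod (the pod name dns-test and the busybox image are just examples); it should print the same nginx welcome page:

[root@k8s-master service]# kubectl run dns-test -n myns --rm -it --restart=Never --image=busybox -- wget -qO- http://my-nginx.myns.svc.cluster.local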

2. YAML file

Here we build on the Deployment above and add the Service configuration. The ports section and the selector section matter most; see the comments in the code.

[root@k8s-master service]# cat service1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
---
​
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
spec:
  ports:   
  - protocol: TCP   #TCP protocol
    targetPort: 80    #forward to port 80 on the backend pods
    port: 80   #the Service listens on port 80
  selector:   #matches the pod template labels in the Deployment above, i.e. the Service serves the pods labeled name: my-nginx-deploy
    name: my-nginx-deploy        
​
[root@k8s-master service]# kubectl apply -f service1.yaml 
deployment.apps/my-nginx created
service/my-nginx-service created
[root@k8s-master service]# kubectl get service -n myns
NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
my-nginx-service   ClusterIP   10.98.64.75   <none>        80/TCP    7s
[root@k8s-master service]# curl 10.98.64.75
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

IV. Service load-balancing configuration

1. kube-proxy proxy modes

This section focuses on the ipvs proxy mode, which handles load distribution from a Service to its backend endpoints. Compared with the older userspace and iptables modes, ipvs has higher forwarding efficiency and throughput and supports more load-balancing policies. Below is how to enable ipvs (also covered in the earlier article on labels); if it is not enabled, kube-proxy automatically falls back to iptables.
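If the ip_vs kernel modules are not already loaded, a minimal sketch of loading them and installing the ipvsadm tool used below (module and package names assume a typical CentOS 7 node; persist the modules via /etc/modules-load.d/ if needed):

[root@k8s-master service]# yum install -y ipset ipvsadm    #ipvsadm is the CLI used to inspect ipvs rules below
[root@k8s-master service]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done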

(1) Enabling ipvs

[root@k8s-master service]# lsmod | grep ip_vs   #check that the ip_vs kernel modules are loaded
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
​
#change the mode field to ipvs
[root@k8s-master service]# kubectl edit configmap kube-proxy -n kube-system 
configmap/kube-proxy edited
...
    metricsBindAddress: ""
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: null
...
​
#delete the kube-proxy pods; replacements are pulled up automatically
[root@k8s-master service]# kubectl get pods -n kube-system | grep kube-proxy
kube-proxy-95q7f                           1/1     Running   0          94m
kube-proxy-qf7wh                           1/1     Running   0          92m
kube-proxy-rtg5c                           1/1     Running   0          92m
[root@k8s-master service]# kubectl delete pod kube-proxy-95q7f kube-proxy-qf7wh kube-proxy-rtg5c -n kube-system 
pod "kube-proxy-95q7f" deleted
pod "kube-proxy-qf7wh" deleted
pod "kube-proxy-rtg5c" deleted
[root@k8s-master service]# kubectl get pods -n kube-system | grep kube-proxy
kube-proxy-7b5fc                           1/1     Running   0          6s
kube-proxy-pvv6k                           1/1     Running   0          6s
kube-proxy-vbfnd                           1/1     Running   0          6s
​
#verify that the change took effect
[root@k8s-master service]# kubectl logs kube-proxy-7b5fc -n kube-system | grep ipvs
I1202 02:44:06.831781       1 server_others.go:218] "Using ipvs Proxier"
[root@k8s-master service]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.2.150:30572 rr
  -> 10.244.36.75:80              Masq    1      0          0         
  -> 10.244.169.147:80            Masq    1      0          0         
  -> 10.244.169.148:80            Masq    1      0          0         
TCP  10.96.0.1:443 rr
  -> 192.168.2.150:6443           Masq    1      1          0         
TCP  10.96.0.10:53 rr
  -> 10.244.235.193:53            Masq    1      0          0         
  -> 10.244.235.194:53            Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.235.193:9153          Masq    1      0          0         
  -> 10.244.235.194:9153          Masq    1      0          0         
TCP  10.98.197.131:80 rr
  -> 10.244.36.75:80              Masq    1      0          0         
  -> 10.244.169.147:80            Masq    1      0          0         
  -> 10.244.169.148:80            Masq    1      0          0         
TCP  10.244.235.192:30572 rr
  -> 10.244.36.75:80              Masq    1      0          0         
  -> 10.244.169.147:80            Masq    1      0          0         
  -> 10.244.169.148:80            Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.235.193:53            Masq    1      0          0         
  -> 10.244.235.194:53            Masq    1      0          0      

(2) Load-balancing scheduling policies

rr (Round Robin): requests are handed to the backend servers in turn. Each request goes to the next server in order until every server has received one, then the cycle starts over.

lc (Least Connections): requests go to the server that currently has the fewest connections. By watching the number of active connections per server and picking the least loaded one, load is spread evenly.

dh (Destination Hashing): a hash is computed from the destination address of the request, and the request goes to the server that the hash maps to. Requests for the same destination therefore always land on the same server.

sh (Source Hashing): like destination hashing, but the hash is computed from the source IP address instead, so requests from the same client always go to the same server.

sed (Shortest Expected Delay): servers are chosen by expected delay. The algorithm takes each server's current load and weight into account and sends the request to the server with the shortest expected delay.

nq (Never Queue): if any server is idle, the request is assigned to it immediately instead of waiting; otherwise the choice falls back to the sed policy.
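kube-proxy's ipvs mode uses rr by default. The algorithm can be switched through the ipvs.scheduler field of the same kube-proxy ConfigMap edited in step (1), after which the kube-proxy pods must be recreated as shown there. A sketch (not tied to this article's cluster):

[root@k8s-master service]# kubectl edit configmap kube-proxy -n kube-system
    ipvs:
      ...
      scheduler: "lc"   #e.g. least connections; an empty string means the default rr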

2. Session affinity

With sessionAffinity, the first request from a client is sent to some pod, and subsequent requests from that client keep going to the same pod. You can also configure timeoutSeconds to set how long the affinity lasts; see the demo below.

3. Demo

Without session affinity, requests are automatically scheduled across the 3 pods according to the algorithm (each pod is set up with a different page to make verification easier; see the kubectl exec sketch after the YAML below).

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
---
​
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: my-nginx-deploy
  type: ClusterIP
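A sketch of giving each pod its own page so the scheduling is visible (the pod names are placeholders; substitute the real names from kubectl get pods -n myns):

[root@k8s-master service]# kubectl exec -it <pod-1-name> -n myns -- /bin/sh -c "echo pod1 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# kubectl exec -it <pod-2-name> -n myns -- /bin/sh -c "echo pod2 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# kubectl exec -it <pod-3-name> -n myns -- /bin/sh -c "echo pod3 > /usr/share/nginx/html/index.html"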
​
[root@k8s-master service]# curl 10.107.18.89
pod3
[root@k8s-master service]# curl 10.107.18.89
pod2
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod3
[root@k8s-master service]# curl 10.107.18.89
pod2

With session affinity configured:

[root@k8s-master service]# cat service1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
---
​
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: my-nginx-deploy
  type: ClusterIP
​
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1
[root@k8s-master service]# curl 10.107.18.89
pod1

V. The four Service types

1. ClusterIP

As covered above, ClusterIP is the type for in-cluster access and has already been demonstrated; the remaining types are introduced next.

2. NodePort

This type makes the Service reachable not only from inside the cluster but also from outside. NodePort exposes a TCP (layer 4) port, but it occupies that port on the cluster nodes, so it is not well suited to large-scale use. Note that once the type is set to NodePort, a node port (host port) is either specified or assigned automatically, and external access then uses the node IP (host IP) + the node port. If you specify the port yourself, keep it within the 30000-32767 range.
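The 30000-32767 window is the default of kube-apiserver's --service-node-port-range flag. A sketch of where it could be adjusted on a kubeadm-installed cluster (rarely necessary):

#/etc/kubernetes/manifests/kube-apiserver.yaml (static pod manifest on the master)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-32767   #widen only if the extra ports are free on every node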

[root@k8s-master service]# cat service1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
---
​
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30572  #the port to expose on the nodes
  selector:
    name: my-nginx-deploy 
  type: NodePort  #set the type to NodePort
​
[root@k8s-master service]# kubectl apply -f service1.yaml 
deployment.apps/my-nginx unchanged
service/my-nginx-service created
[root@k8s-master service]# kubectl get service -n myns
NAME               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
my-nginx-service   NodePort   10.98.197.131   <none>        80:30572/TCP   9s
​
#as shown below, the ClusterIP can still be used for in-cluster access
[root@k8s-node1 ~]# curl 10.98.197.131
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

As shown in the figure below, clients outside the cluster access the Service via a node (host) address plus the node port; a command-line sketch follows the figure.

[Figure 1]
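The same check from the command line, using the master's address 192.168.2.150 (seen in the ipvsadm output earlier) plus the node port 30572:

curl http://192.168.2.150:30572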

3. LoadBalancer

The LoadBalancer type is normally provided by cloud platforms, which dynamically allocate an external gateway address; here we mainly introduce OpenELB as an alternative.

OpenELB is an open-source, enterprise-grade load balancer that provides powerful LoadBalancer functionality for Kubernetes clusters. OpenELB talks to the Kubernetes API to obtain Service and Endpoint information and communicates with internal components (such as etcd) to learn cluster state, so it can dynamically track the state of the whole cluster and changes to its Services. It updates its load-balancing behaviour as Services and Endpoints change, making sure traffic is routed correctly to the backend pods. The following steps show how to deploy OpenELB.

(1) Enable kube-proxy's strictARP on the cluster master so that the nodes stop answering ARP requests on behalf of other interfaces and OpenELB answers them instead.

[root@k8s-master service]# kubectl edit configmap kube-proxy -n kube-system
#change strictARP to true
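A non-interactive sketch of the same strictARP change (a commonly used sed-based pipeline; review the resulting diff before applying it on a production cluster):

kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system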

[Figure 2]

(2) Apply the downloaded openelb.yaml (the image addresses need to be changed)

The file linked here has already been modified; normally you need to change the image address in two places, around lines 1267 and 1300 of openelb.yaml (kubespheredev/kube-webhook-certgen:v1.1.1).

Link: Baidu Netdisk (extraction code: df6b)

[Figure 3]

[root@k8s-master service]# kubectl apply -f openelb.yaml
[root@k8s-master service]# kubectl get pods -n openelb-system
NAME                              READY   STATUS      RESTARTS   AGE
openelb-admission-create-g4q5f    0/1     Completed   0          17s
openelb-admission-patch-j679s     0/1     Completed   0          17s
openelb-keepalive-vip-jk5lh       1/1     Running     0          8s
openelb-keepalive-vip-xcjpw       1/1     Running     0          8s
openelb-manager-99b49789c-ssn4x   1/1     Running     0          17s

(3) Write a YAML file to add an Eip address pool

[root@k8s-master service]# vim ip-pool.yaml
apiVersion: network.kubesphere.io/v1alpha2   #the apiVersion can be checked with kubectl explain Eip.apiVersion once openelb has been applied successfully
kind: Eip
metadata:
  name: my-eip-pool
spec:
  address: 192.168.2.11-192.168.2.20    #an unused IP range, in the same subnet as the hosts, to use as the address pool
  protocol: layer2
  disable: false
  interface: ens33   #the name of the host NIC
​
[root@k8s-master service]# kubectl apply -f ip-pool.yaml 
eip.network.kubesphere.io/my-eip-pool created
[root@k8s-master service]# kubectl get eip
NAME          CIDR                        USAGE   TOTAL
my-eip-pool   192.168.2.11-192.168.2.20           10

(4) Create a Service to verify

[root@k8s-master service]# cat service1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
---
​
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
  annotations:   #these three annotation lines must also be added; they are essential
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2  
    eip.openelb.kubesphere.io/v1alpha2: my-eip-pool   #the name given to the Eip address pool created above
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: my-nginx-deploy
  type: LoadBalancer   #set the type to LoadBalancer
​
[root@k8s-master service]# kubectl apply -f  service1.yaml 
deployment.apps/my-nginx created
service/my-nginx-service created
[root@k8s-master service]# kubectl get service my-nginx-service -n myns  #to verify, access 192.168.2.11
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
my-nginx-service   LoadBalancer   10.107.18.126   192.168.2.11   80:30921/TCP   20s
​
[root@k8s-master service]# curl 192.168.2.11
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

[root@k8s-master service]# kubectl get pods -n myns
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-5d67c8f488-bvvcm   1/1     Running   0          64s
my-nginx-5d67c8f488-cf9vd   1/1     Running   0          64s
my-nginx-5d67c8f488-jrhgt   1/1     Running   0          64s
[root@k8s-master service]# kubectl exec -it my-nginx-5d67c8f488-bvvcm -n myns -- /bin/sh -c "echo pod1 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# kubectl exec -it my-nginx-5d67c8f488-cf9vd -n myns -- /bin/sh -c "echo pod2 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# kubectl exec -it my-nginx-5d67c8f488-jrhgt -n myns -- /bin/sh -c "echo pod3 > /usr/share/nginx/html/index.html"
[root@k8s-master service]# curl 192.168.2.11
pod2
[root@k8s-master service]# curl 192.168.2.11
pod1
[root@k8s-master service]# curl 192.168.2.11
pod1
[root@k8s-master service]# curl 192.168.2.11
pod3

4. ExternalName

This type maps a Service inside the Kubernetes cluster to a service address outside the cluster. It is typically used when workloads need to reach an external service: inside a pod you access the Service by its name, and Kubernetes routes the request to the external address.
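Because the DNS answer for an ExternalName Service is just a CNAME to the external host, a selector and ports are not strictly needed; a minimal sketch (my-external-service is a placeholder name) could look like:

apiVersion: v1
kind: Service
metadata:
  name: my-external-service
  namespace: myns
spec:
  type: ExternalName
  externalName: www.baidu.com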

[root@k8s-master service]# cat service2.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: my-nginx
  name: my-nginx
  namespace: myns
spec:
  replicas: 3
  selector:
    matchLabels:
      name: my-nginx-deploy
  template:
    metadata:
      labels:
        name: my-nginx-deploy
    spec:
      containers:
      - name: my-nginx-pod
        image: nginx
        ports:
        - containerPort: 80
​
---
​
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-service
  namespace: myns
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    name: my-nginx-deploy
  type: ExternalName   #set the type to ExternalName
  externalName: www.baidu.com   #the external address to reach; can be a domain name, an IP, etc.
​
[root@k8s-master service]# kubectl get pods -n myns
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-5d67c8f488-48dsc   1/1     Running   0          19m
my-nginx-5d67c8f488-mn9qt   1/1     Running   0          19m
my-nginx-5d67c8f488-xgbgw   1/1     Running   0          19m
​
# nslookup my-nginx-service
Server:     10.96.0.10
Address:    10.96.0.10#53
my-nginx-service.myns.svc.cluster.local canonical name = www.baidu.com.
Name:   www.baidu.com
Address: 39.156.66.14
Name:   www.baidu.com
Address: 39.156.66.18
Name:   www.baidu.com
Address: 2409:8c00:6c21:104f:0:ff:b03f:3ae
Name:   www.baidu.com
Address: 2409:8c00:6c21:1051:0:ff:b0af:279a
# ping my-nginx-service
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: icmp_seq=0 ttl=127 time=43.167 ms
64 bytes from 39.156.66.18: icmp_seq=1 ttl=127 time=147.273 ms
64 bytes from 39.156.66.18: icmp_seq=2 ttl=127 time=53.310 ms
^C--- www.baidu.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 43.167/81.250/147.273/46.869 ms
