Deploying the Flomesh BookInfo Demo on k3s

This article describes how to deploy the Flomesh BookInfo demo on k3s. The installation steps on k8s are essentially the same; adjust them as needed.

Network Topology

The network topology of the demo to be deployed is as follows:

(Figure: network topology of the demo)

Preparation Steps:

  1. Prepare a virtual machine with a recommended minimum of 2 CPU cores and 8 GB of RAM; either Ubuntu or CentOS works as the operating system.

  2. Install k3s (version 1.19 or later is required), disabling the bundled traefik:

    export INSTALL_K3S_VERSION=v1.19.10+k3s1
    curl -sfL https://get.k3s.io | sh -s - --disable traefik --write-kubeconfig-mode 644
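
    As a quick sanity check before moving on, you can confirm the cluster is up and the node reports Ready:

    kubectl get nodes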
    
  3. Install ClickHouse:

    Ubuntu/Debian:

    sudo apt-get install apt-transport-https ca-certificates dirmngr
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
    
    echo "deb https://repo.clickhouse.tech/deb/stable/ main/" | sudo tee \
        /etc/apt/sources.list.d/clickhouse.list
    sudo apt-get update
    
    sudo apt-get install -y clickhouse-server clickhouse-client
    
    sudo service clickhouse-server start
    

    CentOS/RHEL:

    sudo yum install yum-utils
    sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
    sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/clickhouse.repo
    sudo yum install clickhouse-server clickhouse-client
    
    sudo /etc/init.d/clickhouse-server start
    

    For more installation options and details, refer to the ClickHouse installation documentation.
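
    Whichever distribution you use, a quick way to confirm the server is running (clickhouse-client connects to localhost by default):

    clickhouse-client --query "SELECT version()"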

  4. Adjust the network interface ClickHouse binds to. By default, ClickHouse listens only on the loopback interface at startup, so you need to edit the configuration file /etc/clickhouse-server/config.xml and find the following setting around line 145:

    
    <listen_host>0.0.0.0</listen_host>
    

    Change listen_host to 0.0.0.0 or the IP of eth0, save the configuration, and restart the ClickHouse service:

    sudo systemctl restart clickhouse-server
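
    ClickHouse exposes a simple health probe on its HTTP port (8123), so you can verify it is now reachable on the non-loopback address; it should answer with Ok. (10.0.0.60 is the example VM IP used later in this article, substitute your own):

    curl http://10.0.0.60:8123/ping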
    
  5. Create a table in ClickHouse to store the logs:

    • Start clickhouse-client:

      clickhouse-client -h 127.0.0.1
      

      Wait for a prompt like the following to appear:

      root@myubuntu:~# clickhouse-client -h 127.0.0.1
      ClickHouse client version 21.5.5.12 (official build).
      Connecting to 127.0.0.1:9000 as user default.
      Connected to ClickHouse server version 21.5.5 revision 54448.
      
      myubuntu :) 
      
    • Create the log table with the following SQL:

      CREATE TABLE default.log
      (
         `startTime` Int64 DEFAULT JSONExtractInt(message, 'startTime'),
         `endTime` Int64 DEFAULT JSONExtractInt(message, 'endTime'),
         `latency` Int64 DEFAULT JSONExtractInt(message, 'latency'),
         `status` Int16 DEFAULT JSONExtractInt(response, 'status'),
         `statusText` String DEFAULT JSONExtractString(response, 'statusText'),
         `protocol` String DEFAULT JSONExtractString(message, 'protocol'),
         `method` String DEFAULT JSONExtractString(message, 'method'),
         `path` String DEFAULT JSONExtractString(message, 'path'),
         `headers` String DEFAULT JSONExtractRaw(message, 'headers'),
         `body` String DEFAULT JSONExtractString(message, 'body'),
         `response` String DEFAULT JSONExtractRaw(message, 'response'),
         `response.protocol` String DEFAULT JSONExtractString(response, 'protocol'),
         `message` String
      )
      ENGINE = MergeTree
      PARTITION BY (toYYYYMM(toDateTime(startTime / 1000)))
      ORDER BY (status, startTime)
      SETTINGS index_granularity = 8192;
      

      Output similar to the following indicates the table has been created successfully:

      root@myubuntu:~# clickhouse-client -h 127.0.0.1
      ClickHouse client version 21.5.5.12 (official build).
      Connecting to 127.0.0.1:9000 as user default.
      Connected to ClickHouse server version 21.5.5 revision 54448.
      
      myubuntu :) CREATE TABLE default.log
      :-] (
      :-] `startTime` Int64 DEFAULT JSONExtractInt(message, 'startTime'),
      :-] `endTime` Int64 DEFAULT JSONExtractInt(message, 'endTime'),
      :-] `latency` Int64 DEFAULT JSONExtractInt(message, 'latency'),
      :-] `status` Int16 DEFAULT JSONExtractInt(response, 'status'),
      :-] `statusText` String DEFAULT JSONExtractString(response, 'statusText'),
      :-] `protocol` String DEFAULT JSONExtractString(message, 'protocol'),
      :-] `method` String DEFAULT JSONExtractString(message, 'method'),
      :-] `path` String DEFAULT JSONExtractString(message, 'path'),
      :-] `headers` String DEFAULT JSONExtractRaw(message, 'headers'),
      :-] `body` String DEFAULT JSONExtractString(message, 'body'),
      :-] `response` String DEFAULT JSONExtractRaw(message, 'response'),
      :-] `response.protocol` String DEFAULT JSONExtractString(response, 'protocol'),
      :-] `message` String
      :-] )
      :-] ENGINE = MergeTree
      :-] PARTITION BY (toYYYYMM(toDateTime(startTime / 1000)))
      :-] ORDER BY (status, startTime)
      :-] SETTINGS index_granularity = 8192;
      
      CREATE TABLE default.log
      (
          `startTime` Int64 DEFAULT JSONExtractInt(message, 'startTime'),
          `endTime` Int64 DEFAULT JSONExtractInt(message, 'endTime'),
          `latency` Int64 DEFAULT JSONExtractInt(message, 'latency'),
          `status` Int16 DEFAULT JSONExtractInt(response, 'status'),
          `statusText` String DEFAULT JSONExtractString(response, 'statusText'),
          `protocol` String DEFAULT JSONExtractString(message, 'protocol'),
          `method` String DEFAULT JSONExtractString(message, 'method'),
          `path` String DEFAULT JSONExtractString(message, 'path'),
          `headers` String DEFAULT JSONExtractRaw(message, 'headers'),
          `body` String DEFAULT JSONExtractString(message, 'body'),
          `response` String DEFAULT JSONExtractRaw(message, 'response'),
          `response.protocol` String DEFAULT JSONExtractString(response, 'protocol'),
          `message` String
      )
      ENGINE = MergeTree
      PARTITION BY toYYYYMM(toDateTime(startTime / 1000))
      ORDER BY (status, startTime)
      SETTINGS index_granularity = 8192
      
      Query id: 85ecd31a-07dd-4d03-9a62-8bcefa94f41f
      
      Ok.
      
      0 rows in set. Elapsed: 0.012 sec. 
      
      myubuntu :) 
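
      Note that the table is designed so that only the raw JSON log line needs to be inserted into the message column; every other column is then filled in automatically by its DEFAULT JSONExtract expression at insert time. If you want to double-check the resulting schema from the shell, something like this works:

      clickhouse-client -h 127.0.0.1 --query "DESCRIBE TABLE default.log"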
      
      

Deploying the Demo

  1. Clone the code from the Training Materials repository onto the prepared virtual machine:

    git clone https://github.com/flomesh-io/training-materials
    
  2. Change into the demo directory:

    cd training-materials/003-mesh/test
    

    The directory contains five folders, numbered in order:

    root@k3s:~/training-materials/003-mesh/test# tree .
    .
    ├── 001-deps
    │   └── cert-manager-v1.3.1.yaml
    ├── 002-operator
    │   └── pipy-operator.yaml
    ├── 003-ingress
    │   ├── ingress-pipy.yaml
    │   └── ingress.yaml
    ├── 004-sidecar
    │   └── proxy-profile.yaml
    └── 005-bookinfo
        ├── bookinfo.yaml
        └── clickhouse.yaml
    
    5 directories, 7 files
    

    Next, we will deploy everything following the numeric order of the folders.

  3. Deploy cert-manager:

    kubectl apply -f 001-deps/
    

    After running the command, you should see output similar to the following:

    root@k3s:~/training-materials/003-mesh/test# kubectl apply -f 001-deps/
    customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
    customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
    namespace/cert-manager created
    serviceaccount/cert-manager-cainjector created
    serviceaccount/cert-manager created
    serviceaccount/cert-manager-webhook created
    clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
    clusterrole.rbac.authorization.k8s.io/cert-manager-view created
    clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
    clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
    clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
    clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
    role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
    role.rbac.authorization.k8s.io/cert-manager:leaderelection created
    role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
    rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
    rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
    rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
    service/cert-manager created
    service/cert-manager-webhook created
    deployment.apps/cert-manager-cainjector created
    deployment.apps/cert-manager created
    deployment.apps/cert-manager-webhook created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
    

    Check the pods in the cert-manager namespace. Once all cert-manager components are in the Running state, cert-manager has been installed successfully and you can move on to deploying pipy-operator:

    root@k3s:~/training-materials/003-mesh/test# kubectl get po -n cert-manager
    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-cainjector-59f76f7fff-vzrqb   1/1     Running   0          37s
    cert-manager-59f6c76f4b-g7lv2              1/1     Running   0          37s
    cert-manager-webhook-565d54dd68-cf77v      1/1     Running   0          37s
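
    Instead of polling manually, you can also block until all pods in the namespace are ready with a generic kubectl idiom:

    kubectl wait --for=condition=Ready pod --all -n cert-manager --timeout=120s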
    
  4. Deploy pipy-operator:

    kubectl apply -f 002-operator/
    

    After running the command, you will see output similar to the following:

    root@k3s:~/training-materials/003-mesh/test# kubectl apply -f 002-operator/
    namespace/flomesh-system created
    customresourcedefinition.apiextensions.k8s.io/proxies.flomesh.io created
    customresourcedefinition.apiextensions.k8s.io/proxyprofiles.flomesh.io created
    serviceaccount/flomesh-controller-manager created
    role.rbac.authorization.k8s.io/flomesh-leader-election-role created
    clusterrole.rbac.authorization.k8s.io/flomesh-manager-role created
    clusterrole.rbac.authorization.k8s.io/flomesh-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/flomesh-proxy-role created
    rolebinding.rbac.authorization.k8s.io/flomesh-leader-election-rolebinding created
    clusterrolebinding.rbac.authorization.k8s.io/flomesh-manager-rolebinding created
    clusterrolebinding.rbac.authorization.k8s.io/flomesh-proxy-rolebinding created
    configmap/flomesh-manager-config created
    configmap/flomesh-proxy-injector-tpl created
    service/flomesh-controller-manager-metrics-service created
    service/flomesh-pipy-sidecar-injector-service created
    service/flomesh-webhook-service created
    deployment.apps/flomesh-controller-manager created
    deployment.apps/flomesh-pipy-sidecar-injector created
    certificate.cert-manager.io/flomesh-serving-cert created
    issuer.cert-manager.io/flomesh-selfsigned-issuer created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/flomesh-mutating-webhook-configuration created
    mutatingwebhookconfiguration.admissionregistration.k8s.io/flomesh-sidecar-injector-webhook-configuration created
    validatingwebhookconfiguration.admissionregistration.k8s.io/flomesh-validating-webhook-configuration created
    

    Check the pods in the flomesh-system namespace. Once all components there are in the Running state, pipy-operator has been installed successfully and you can move on to deploying the Ingress Controller:

    root@k3s:~/training-materials/003-mesh/test# kubectl get po -n flomesh-system
    NAME                                             READY   STATUS    RESTARTS   AGE
    flomesh-pipy-sidecar-injector-655994b6c6-fk2f8   1/1     Running   0          85s
    flomesh-controller-manager-8db7d9986-lt75l       1/1     Running   0          85s
    
  5. Deploy ingress-pipy. This step is split into two stages:

    • First, deploy the Ingress Controller:

      kubectl apply -f 003-ingress/ingress-pipy.yaml 
      

      After running the command, you should see output similar to the following:

      root@k3s:~/training-materials/003-mesh/test# kubectl apply -f 003-ingress/ingress-pipy.yaml 
      namespace/ingress-pipy created
      customresourcedefinition.apiextensions.k8s.io/ingressparameters.flomesh.io created
      serviceaccount/ingress-pipy created
      role.rbac.authorization.k8s.io/ingress-pipy-leader-election-role created
      clusterrole.rbac.authorization.k8s.io/ingress-pipy-role created
      rolebinding.rbac.authorization.k8s.io/ingress-pipy-leader-election-rolebinding created
      clusterrolebinding.rbac.authorization.k8s.io/ingress-pipy-rolebinding created
      configmap/ingress-config created
      service/ingress-pipy-cfg created
      service/ingress-pipy-controller created
      service/ingress-pipy-defaultbackend created
      service/webhook-service created
      deployment.apps/ingress-pipy-cfg created
      deployment.apps/ingress-pipy-controller created
      deployment.apps/ingress-pipy-manager created
      certificate.cert-manager.io/serving-cert created
      issuer.cert-manager.io/selfsigned-issuer created
      mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating-webhook-configuration created
      validatingwebhookconfiguration.admissionregistration.k8s.io/validating-webhook-configuration created
      

      Check the pods in the ingress-pipy namespace; once they are all in the Running state, ingress-pipy has been installed successfully and you can proceed to create the Ingress:

      root@k3s:~/training-materials/003-mesh/test# kubectl get po -n ingress-pipy 
      NAME                                       READY   STATUS    RESTARTS   AGE
      svclb-ingress-pipy-controller-7hv48        1/1     Running   0          114s
      ingress-pipy-cfg-6856d674f7-5vvrv          1/1     Running   0          114s
      ingress-pipy-controller-76cd866d78-nqftz   1/1     Running   0          114s
      ingress-pipy-manager-6dddc98484-lzd67      1/1     Running   0          114s
      
    • Create the Ingress:

      kubectl apply -f 003-ingress/ingress.yaml 
      

      You should see the following output:

      root@k3s:~/training-materials/003-mesh/test# kubectl apply -f 003-ingress/ingress.yaml 
      ingressclass.networking.k8s.io/pipy created
      ingress.networking.k8s.io/ingress-pipy-bookinfo created
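
      As a quick check that the Ingress was admitted, list it and confirm it uses the pipy ingress class:

      kubectl get ingress ingress-pipy-bookinfo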
      
  6. Next, create a ProxyProfile, which provides the sidecar configuration for the pods that need a sidecar injected:

    kubectl apply -f 004-sidecar/
    

    You should see output like:

    root@k3s:~/training-materials/003-mesh/test# kubectl apply -f 004-sidecar/
    proxyprofile.flomesh.io/pf-bookinfo-demo created
    

    Check that the ProxyProfile has been created successfully:

    root@k3s:~/training-materials/003-mesh/test# kubectl get pf
    NAME               SELECTOR                                                    NAMESPACE   AGE
    pf-bookinfo-demo   {"matchLabels":{"sys":"bookinfo-samples","version":"v1"}}   default     52s
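
    If you want to see the full spec, including the sidecar configuration that will be injected, describe the resource (pf is the short name for proxyprofiles.flomesh.io, as used above):

    kubectl describe pf pf-bookinfo-demo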
    
  7. Finally, deploy the BookInfo application:

    • First, edit 005-bookinfo/clickhouse.yaml and change subsets.addresses.ip to the virtual machine's local IP (note: not the loopback address) so that ClickHouse can be reached from inside k3s:

      apiVersion: v1
      kind: Endpoints
      metadata:
        name: samples-clickhouse
        labels:
          app: clickhouse
          service: clickhouse
      subsets:
        - addresses:
          - ip: 10.0.0.60
          ports:
          - name: chdb
            port: 8123
            protocol: TCP
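
      The 10.0.0.60 above is only an example. If you are not sure of your VM's non-loopback address, a one-liner like this prints the first one (assuming hostname -I lists the primary interface first):

      hostname -I | awk '{print $1}'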
      
    • After saving the file, deploy the BookInfo application:

      kubectl apply -f 005-bookinfo/
      

      Running the command produces output like the following:

      root@k3s:~/training-materials/003-mesh/test# kubectl apply -f 005-bookinfo/
      deployment.apps/samples-discovery-server-v1 created
      deployment.apps/samples-config-service-v1 created
      deployment.apps/samples-api-gateway-v1 created
      deployment.apps/samples-bookinfo-details-v1 created
      deployment.apps/samples-bookinfo-ratings-v1 created
      deployment.apps/samples-bookinfo-reviews-v1 created
      service/samples-discovery-server created
      service/samples-config-service created
      service/samples-api-gateway created
      service/samples-bookinfo-details created
      service/samples-bookinfo-ratings created
      service/samples-bookinfo-reviews created
      endpoints/samples-clickhouse created
      service/samples-clickhouse created
      

      BookInfo is deployed into the default namespace. Check the status of each pod until they are all Running:

      root@k3s:~/training-materials/003-mesh/test# kubectl get po
      NAME                                           READY   STATUS    RESTARTS   AGE
      samples-discovery-server-v1-7768d9894b-j4v6g   2/2     Running   0          3m26s
      samples-config-service-v1-84888bfb5b-qlxq8     1/1     Running   0          3m26s
      samples-bookinfo-details-v1-55cd5b7d75-5v88z   2/2     Running   0          3m26s
      samples-api-gateway-v1-5bbf495495-sbn8w        2/2     Running   0          3m26s
      samples-bookinfo-reviews-v1-64664bc6fc-zpvt2   2/2     Running   0          3m26s
      samples-bookinfo-ratings-v1-7c98464bc9-xlc6x   2/2     Running   0          3m26s
      

      Check the service logs to make sure the services have finished starting:

      kubectl logs samples-bookinfo-reviews-v1-64664bc6fc-zpvt2 -c app
      

      Log output similar to the following indicates the service has started successfully:

      2021-05-30 05:59:00.256  INFO [bookinfo-reviews,,,] 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_BOOKINFO-REVIEWS/samples-bookinfo-reviews-v1-64664bc6fc-zpvt2:bookinfo-reviews:8102: registering service...
      2021-05-30 05:59:00.295  INFO [bookinfo-reviews,,,] 1 --- [           main] io.undertow                              : starting server: Undertow - 2.1.6.Final
      2021-05-30 05:59:00.327  INFO [bookinfo-reviews,,,] 1 --- [           main] org.xnio                                 : XNIO version 3.8.0.Final
      2021-05-30 05:59:00.328  INFO [bookinfo-reviews,,,] 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_BOOKINFO-REVIEWS/samples-bookinfo-reviews-v1-64664bc6fc-zpvt2:bookinfo-reviews:8102 - registration status: 204
      2021-05-30 05:59:00.346  INFO [bookinfo-reviews,,,] 1 --- [           main] org.xnio.nio                             : XNIO NIO Implementation Version 3.8.0.Final
      2021-05-30 05:59:00.387  INFO [bookinfo-reviews,,,] 1 --- [           main] org.jboss.threads                        : JBoss Threads version 3.1.0.Final
      2021-05-30 05:59:00.462  INFO [bookinfo-reviews,,,] 1 --- [           main] o.s.b.w.e.undertow.UndertowWebServer     : Undertow started on port(s) 8102 (http)
      2021-05-30 05:59:00.463  INFO [bookinfo-reviews,,,] 1 --- [           main] .s.c.n.e.s.EurekaAutoServiceRegistration : Updating port to 8102
      2021-05-30 05:59:00.503  INFO [bookinfo-reviews,,,] 1 --- [           main] i.f.bookinfo.reviews.ReviewsApplication  : Started ReviewsApplication in 46.855 seconds (JVM running for 50.107)
      
      

Testing the Demo

  1. Determine the IP address the Ingress Controller is bound to:

    root@k3s:~/training-materials/003-mesh/test# kubectl -n ingress-pipy get svc ingress-pipy-controller
    NAME                      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
    ingress-pipy-controller   LoadBalancer   10.43.92.37   10.0.2.15     80:31255/TCP   21m
    

    The address in the EXTERNAL-IP column is the external IP bound to the Ingress Controller; here it is 10.0.2.15.
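
    For the tests below, you can also capture this address into a shell variable instead of copying it by hand (the jsonpath below assumes the address was provisioned as a LoadBalancer ingress IP, as k3s's built-in service load balancer does):

    INGRESS_IP=$(kubectl -n ingress-pipy get svc ingress-pipy-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo $INGRESS_IP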

  2. Test creating a rating:

    curl -X POST http://10.0.2.15/bookinfo-ratings/ratings \
     -H "Content-Type: application/json" \
     -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"a071c269-369c-4f79-be03-6a41f27d6b5f","rating":3}' 
    

    Test querying the rating:

    curl http://10.0.2.15/bookinfo-ratings/ratings/a071c269-369c-4f79-be03-6a41f27d6b5f
    
  3. Test creating a review:

    curl -X POST http://10.0.2.15/bookinfo-reviews/reviews \
     -H "Content-Type: application/json" \
     -d '{"reviewerId":"9bc908be-0717-4eab-bb51-ea14f669ef20","productId":"a071c269-369c-4f79-be03-6a41f27d6b5f","review":"This was OK.","rating":3}'
    

    Test querying the review:

    curl http://10.0.2.15/bookinfo-reviews/reviews/a071c269-369c-4f79-be03-6a41f27d6b5f
    
  4. Test querying book details:

    curl http://10.0.2.15/bookinfo-details/details/1234567890
    

Querying the Logs

  1. Run clickhouse-client and connect to the ClickHouse instance:

    root@myubuntu:~# clickhouse-client -h 127.0.0.1
    ClickHouse client version 21.5.5.12 (official build).
    Connecting to 127.0.0.1:9000 as user default.
    Connected to ClickHouse server version 21.5.5 revision 54448.
    
    myubuntu :) 
    
  2. Query the application logs captured by the sidecar:

    myubuntu :) select * from default.log where path <> '' limit 1
    
    SELECT *
    FROM default.log
    WHERE path != ''
    LIMIT 1
    
    Query id: 067df10f-9175-48b1-93d7-ccc01945eb27
    
    ┌─────startTime─┬───────endTime─┬─latency─┬─status─┬─statusText─┬─protocol─┬─method─┬─path────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─headers────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─body─┬─response───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─response.protocol─┬─message────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
    │ 1622355980625 │ 1622355980626 │       1 │    200 │ OK         │ HTTP/1.1 │ PUT    │ /eureka/apps/BOOKINFO-DETAILS/samples-bookinfo-details-v1-55cd5b7d75-5v88z:bookinfo-details:8103?status=UP&lastDirtyTimestamp=1622354330216 │ {"discoveryidentity-version":"1.4","accept-encoding":"gzip","discoveryidentity-name":"DefaultClient","user-agent":"Java-EurekaClient/v1.10.11","host":"samples-discovery-server:8771","discoveryidentity-id":"10.42.0.19"} │      │ {"body":"","statusText":"OK","status":200,"headers":{"connection":"keep-alive","date":"Sun, 30 May 2021 06:26:20 GMT","content-type":"application/xml"},"protocol":"HTTP/1.1"} │ HTTP/1.1          │ {"response":{"body":"","statusText":"OK","status":200,"headers":{"connection":"keep-alive","date":"Sun, 30 May 2021 06:26:20 GMT","content-type":"application/xml"},"protocol":"HTTP/1.1"},"latency":1,"method":"PUT","headers":{"discoveryidentity-version":"1.4","accept-encoding":"gzip","discoveryidentity-name":"DefaultClient","user-agent":"Java-EurekaClient/v1.10.11","host":"samples-discovery-server:8771","discoveryidentity-id":"10.42.0.19"},"startTime":1622355980625,"endTime":1622355980626,"body":"","path":"/eureka/apps/BOOKINFO-DETAILS/samples-bookinfo-details-v1-55cd5b7d75-5v88z:bookinfo-details:8103?status=UP&lastDirtyTimestamp=1622354330216","protocol":"HTTP/1.1"} │
    └───────────────┴───────────────┴─────────┴────────┴────────────┴──────────┴────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
    
    1 rows in set. Elapsed: 0.007 sec. 
    
    myubuntu :) 
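
    Once the curl tests have generated some traffic, ordinary aggregate queries work as well; for example, a rough per-status breakdown of captured requests and their average latency:

    clickhouse-client -h 127.0.0.1 --query "SELECT status, count() AS requests, round(avg(latency), 1) AS avg_latency_ms FROM default.log WHERE path <> '' GROUP BY status ORDER BY requests DESC"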
    
    
