This article walks through deploying a gRPC service with Knative Serving.
The sample can be used to try out gRPC, HTTP/2, and custom port configuration in a Knative service.
The container image is built with two binaries: the server and the client. This is done to make testing easier and is not recommended for production containers.
Building and deploying the sample code
1: Clone the code repository:
git clone -b "release-0.14" https://github.com/knative/docs knative-docs
cd knative-docs/docs/serving/samples/grpc-ping-go
2: Use Docker to build a container image for this service and push it to Docker Hub.
Replace {username} with your Docker Hub username, then run the following commands:
# Build the container on your local machine.
docker build --tag "{username}/grpc-ping-go" .
# Push the container to docker registry.
docker push "{username}/grpc-ping-go"
3: Update the service.yaml file in the project to reference the image published in step 2.
Replace {username} in service.yaml with your Docker Hub username:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: grpc-ping
namespace: default
spec:
template:
spec:
containers:
- image: docker.io/{username}/grpc-ping-go
ports:
- name: h2c
containerPort: 8080
4: Deploy the service with kubectl:
kubectl apply --filename service.yaml
service.serving.knative.dev/grpc-ping created
Exploring
After deployment, you can inspect the created resources with kubectl commands.
First, look at the Knative Service:
# This will show the Knative service that we created:
kubectl get ksvc grpc-ping --output yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
annotations:
serving.knative.dev/creator: jenkins
serving.knative.dev/lastModifier: jenkins
creationTimestamp: "2020-05-13T01:55:44Z"
generation: 1
name: grpc-ping
namespace: default
resourceVersion: "2201773"
selfLink: /apis/serving.knative.dev/v1/namespaces/default/services/grpc-ping
uid: 7977a697-c413-459f-852c-60e5adf3dccc
spec:
template:
metadata:
creationTimestamp: null
spec:
containerConcurrency: 0
containers:
- image: docker.io/iyacontrol/grpc-ping-go
name: user-container
ports:
- containerPort: 8080
name: h2c
readinessProbe:
successThreshold: 1
tcpSocket:
port: 0
resources: {}
timeoutSeconds: 300
traffic:
- latestRevision: true
percent: 100
status:
address:
url: http://grpc-ping.default.svc.cluster.local
conditions:
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: ConfigurationsReady
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: Ready
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: RoutesReady
latestCreatedRevisionName: grpc-ping-gcltn
latestReadyRevisionName: grpc-ping-gcltn
observedGeneration: 1
traffic:
- latestRevision: true
percent: 100
revisionName: grpc-ping-gcltn
url: http://grpc-ping.default.serverless.xx.me
Next, look at the Knative Route:
# This will show the Route, created by the service:
kubectl get route grpc-ping --output yaml
apiVersion: serving.knative.dev/v1
kind: Route
metadata:
annotations:
serving.knative.dev/creator: jenkins
serving.knative.dev/lastModifier: jenkins
creationTimestamp: "2020-05-13T01:55:44Z"
finalizers:
- routes.serving.knative.dev
generation: 1
labels:
serving.knative.dev/service: grpc-ping
name: grpc-ping
namespace: default
ownerReferences:
- apiVersion: serving.knative.dev/v1
blockOwnerDeletion: true
controller: true
kind: Service
name: grpc-ping
uid: 7977a697-c413-459f-852c-60e5adf3dccc
resourceVersion: "2201772"
selfLink: /apis/serving.knative.dev/v1/namespaces/default/routes/grpc-ping
uid: 8455e488-2ac2-4e1e-8b51-8d20e1836801
spec:
traffic:
- configurationName: grpc-ping
latestRevision: true
percent: 100
status:
address:
url: http://grpc-ping.default.svc.cluster.local
conditions:
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: AllTrafficAssigned
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: IngressReady
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: Ready
observedGeneration: 1
traffic:
- latestRevision: true
percent: 100
revisionName: grpc-ping-gcltn
url: http://grpc-ping.default.serverless.xx.me
Then look at the Knative Configuration:
# This will show the Configuration, created by the service:
kubectl get configurations grpc-ping --output yaml
apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
annotations:
serving.knative.dev/creator: jenkins
serving.knative.dev/lastModifier: jenkins
creationTimestamp: "2020-05-13T01:55:44Z"
generation: 1
labels:
serving.knative.dev/route: grpc-ping
serving.knative.dev/service: grpc-ping
name: grpc-ping
namespace: default
ownerReferences:
- apiVersion: serving.knative.dev/v1
blockOwnerDeletion: true
controller: true
kind: Service
name: grpc-ping
uid: 7977a697-c413-459f-852c-60e5adf3dccc
resourceVersion: "2201750"
selfLink: /apis/serving.knative.dev/v1/namespaces/default/configurations/grpc-ping
uid: 1a8ab033-7a28-41f8-97ab-cc00560bf613
spec:
template:
metadata:
creationTimestamp: null
spec:
containerConcurrency: 0
containers:
- image: docker.io/iyacontrol/grpc-ping-go
name: user-container
ports:
- containerPort: 8080
name: h2c
readinessProbe:
successThreshold: 1
tcpSocket:
port: 0
resources: {}
timeoutSeconds: 300
status:
conditions:
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: Ready
latestCreatedRevisionName: grpc-ping-gcltn
latestReadyRevisionName: grpc-ping-gcltn
observedGeneration: 1
From the Configuration we can see that the Revision is named grpc-ping-gcltn, so let's look at the Knative Revision:
# This will show the Revision, created by the Configuration:
kubectl get revisions grpc-ping-gcltn -o yaml
apiVersion: serving.knative.dev/v1
kind: Revision
metadata:
annotations:
serving.knative.dev/creator: jenkins
serving.knative.dev/lastPinned: "1589334954"
creationTimestamp: "2020-05-13T01:55:44Z"
generateName: grpc-ping-
generation: 1
labels:
serving.knative.dev/configuration: grpc-ping
serving.knative.dev/configurationGeneration: "1"
serving.knative.dev/route: grpc-ping
serving.knative.dev/service: grpc-ping
name: grpc-ping-gcltn
namespace: default
ownerReferences:
- apiVersion: serving.knative.dev/v1
blockOwnerDeletion: true
controller: true
kind: Configuration
name: grpc-ping
uid: 1a8ab033-7a28-41f8-97ab-cc00560bf613
resourceVersion: "2201933"
selfLink: /apis/serving.knative.dev/v1/namespaces/default/revisions/grpc-ping-gcltn
uid: d3d13c00-1aa9-44e6-979d-60b50d40b519
spec:
containerConcurrency: 0
containers:
- image: docker.io/iyacontrol/grpc-ping-go
name: user-container
ports:
- containerPort: 8080
name: h2c
readinessProbe:
successThreshold: 1
tcpSocket:
port: 0
resources: {}
timeoutSeconds: 300
status:
conditions:
- lastTransitionTime: "2020-05-13T01:56:54Z"
message: The target is not receiving traffic.
reason: NoTraffic
severity: Info
status: "False"
type: Active
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: ContainerHealthy
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: Ready
- lastTransitionTime: "2020-05-13T01:55:54Z"
status: "True"
type: ResourcesAvailable
imageDigest: index.docker.io/iyacontrol/grpc-ping-go@sha256:bfe8362fd0f7ccf18502688baca084b6ea63b5725bfef287d8d7dcef9320a17b
logUrl: http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'d3d13c00-1aa9-44e6-979d-60b50d40b519',type:phrase))))
observedGeneration: 1
serviceName: grpc-ping-gcltn
Testing the service
Testing a gRPC service requires a gRPC client built from the same protobuf definition that the server uses.
The Dockerfile builds the client binary alongside the server. To run the client, you use the same container image that was deployed for the server, overriding the entrypoint command so that the client binary runs instead of the server binary.
Replace {username} with your Docker Hub username, then run the following command:
docker run --rm {username}/grpc-ping-go \
/client \
-server_addr="grpc-ping.default.serverless.xx.me:80" \
-insecure
The arguments after the container tag {username}/grpc-ping-go replace the entrypoint command defined in the Dockerfile's CMD statement.
Running the test produces output similar to the following:
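For illustration, the tail of such a Dockerfile might look like the sketch below (the actual Dockerfile in the sample repo may differ; the binary paths are assumptions):

```dockerfile
# Both binaries are copied into the image; /server is the default command.
COPY server /server
COPY client /client
CMD ["/server"]
```

Because arguments passed after the image name in `docker run` replace CMD, passing `/client ...` runs the client binary from the same image.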
2020/05/13 02:06:43 Ping got hello - pong
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466829984 +0000 UTC m=+1.361228108
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466881062 +0000 UTC m=+1.361279193
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466890156 +0000 UTC m=+1.361288283
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466896804 +0000 UTC m=+1.361294929
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466908132 +0000 UTC m=+1.361306260
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466915748 +0000 UTC m=+1.361313871
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466926437 +0000 UTC m=+1.361324564
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466934259 +0000 UTC m=+1.361332383
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466945454 +0000 UTC m=+1.361343587
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466953871 +0000 UTC m=+1.361351996
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.4669644 +0000 UTC m=+1.361362524
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466971662 +0000 UTC m=+1.361369790
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466985621 +0000 UTC m=+1.361383746
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466993072 +0000 UTC m=+1.361391202
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467000507 +0000 UTC m=+1.361398632
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467007443 +0000 UTC m=+1.361405566
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467026014 +0000 UTC m=+1.361424141
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467034894 +0000 UTC m=+1.361433022
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467044127 +0000 UTC m=+1.361442256
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467052183 +0000 UTC m=+1.361450308
The test passes.
Conclusion
When deploying a gRPC project, the port in the Service spec needs special handling:
ports:
- name: h2c
containerPort: 8080
The port name is h2c. h2 is HTTP/2 over TLS (with the protocol negotiated via ALPN), while h2c is HTTP/2 over plaintext TCP.