How to set up JHipster microservices with Istio service mesh on Kubernetes

by Deepu K Sasidharan

You can find a more up-to-date version of this post that uses JHipster 6 and the latest Istio & Kubernetes versions here.

Istio is the coolest kid on the DevOps and Cloud block now. For those of you who aren’t following closely enough — Istio is a service mesh for distributed application architectures, especially the ones that you run on the cloud with Kubernetes. Istio plays extremely nicely with Kubernetes, so nicely that you might think it’s part of Kubernetes.

If you are still wondering what the heck a service mesh or Istio is, then let's have an overview of Istio.

Istio provides the following functionality in a distributed application architecture:

  • Service discovery — Traditionally provided by platforms like Netflix Eureka or Consul.

  • Automatic load balancing — You might have used Netflix Zuul for this.

  • Routing, circuit breaking, retries, fail-overs, fault injection — Think of Netflix Ribbon, Hystrix and so on.

  • Policy enforcement for access control, rate limiting, A/B testing, traffic splits, and quotas — Again you might have used Zuul to do some of these.

  • Metrics, logs, and traces — Think of ELK or Stackdriver.

  • Secure service-to-service communication

Below is the architecture of Istio.

It can be classified into 2 distinct planes.

Data plane: Made up of Envoy proxies deployed as sidecars to the application containers. They control all the incoming and outgoing traffic of the container.

Control plane: It uses Pilot to manage and configure the proxies to route traffic. It also configures Mixer to enforce policies and to collect telemetry. It also has other components like Citadel, to manage security, and Galley, to manage configurations.

Istio also configures an instance of Grafana, Prometheus and Jaeger for monitoring and observability. You can use these or use your existing monitoring stack instead.

I hope this provides a good overview of Istio; now let's focus on the goal of this article.

Devoxx 2018

I did a talk at Devoxx 2018 along with Julien Dubois where we did the same demo, and I promised that I’d write a detailed blog post about it.

You can watch the video to see JHipster + Istio in action.

You can watch the slides on Speaker Deck as well.

Preparing the Kubernetes cluster

First, let us prepare a Kubernetes cluster to deploy Istio and our application containers. Follow the instructions for any one of the platforms you prefer.

Prerequisites

kubectl: The command line tool to interact with Kubernetes. Install and configure it.

Create a cluster on Azure Kubernetes Service (AKS)

If you are going to use Azure, install the Azure CLI to interact with it and log in with your Azure account (you can create a free account if you don’t have one already).

First let us create a resource group. You can use any region you like here instead of East US.

$ az group create --name eCommerceCluster --location eastus

Create the Kubernetes cluster:

$ az aks create \
--resource-group eCommerceCluster \
--name eCommerceCluster \
--node-count 4 \
--kubernetes-version 1.11.4 \
--enable-addons monitoring \
--generate-ssh-keys

The node-count flag is important as the setup requires at least four nodes with the default CPU to run everything. You can try to use a higher kubernetes-version if it is supported, else stick to 1.11.4.

The cluster creation could take a while, so sit back and relax.

Once the cluster is created, fetch its credentials for use from kubectl by running the below command. It automatically injects the credentials into your kubectl configuration under ~/.kube/config

$ az aks get-credentials \
--resource-group eCommerceCluster \
--name eCommerceCluster

You can view the created cluster in the Azure portal:

Run kubectl get nodes to see it in the command line and to verify that kubectl can connect to your cluster.

Proceed to the Install and set up Istio section.

Create a cluster on Google Kubernetes Engine (GKE)

If you are going to use Google Cloud Platform (GCP), install the Gcloud CLI to interact with it and log in with your GCP account (you can create a free account if you don’t have one already).

First, we need a GCP project. You can either use an existing project that you have or create a new one using the Gcloud CLI with the below command:

$ gcloud projects create jhipster-demo-deepu

Set the project you want to use as the default project:

$ gcloud config set project jhipster-demo-deepu

Now let us create a cluster for our application with the below command:

$ gcloud container clusters create hello-hipster \
    --cluster-version 1.10 \
    --num-nodes 4 \
    --machine-type n1-standard-2

The num-nodes and machine-type flags are important as the setup requires at least four nodes with a bigger CPU to run everything. You can try to use a higher cluster-version if it is supported, else stick to 1.10.

The cluster creation could take a while, so sit back and relax.

Once the cluster is created, fetch its credentials for use from kubectl by running the below command. It automatically injects the credentials into your kubectl configuration under ~/.kube/config

$ gcloud container clusters get-credentials hello-hipster

You can view the created cluster in the GCP GUI.

Run kubectl get nodes to see it in the command line and to verify that kubectl can connect to your cluster.

Install and set up Istio

Install Istio on your machine by following these steps:

$ cd ~/

$ export ISTIO_VERSION=1.0.2

$ curl -L https://git.io/getLatestIstio | sh -

$ ln -sf istio-$ISTIO_VERSION istio

$ export PATH=~/istio/bin:$PATH

Make sure to use version 1.0.2 since the latest version seems to have issues connecting to the MySQL database containers.
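
You can verify that the CLI is on your path by checking its version; it should report 1.0.2:

$ istioctl version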

Now let us install Istio on our Kubernetes cluster by applying the provided Kubernetes manifests and Helm templates from Istio.

$ kubectl apply -f ~/istio/install/kubernetes/helm/istio/templates/crds.yaml
$ kubectl apply -f ~/istio/install/kubernetes/istio-demo.yaml \
    --as=admin --as-group=system:masters

Wait for the pods to be in running status; these will be deployed to the istio-system namespace.

$ watch kubectl get pods -n istio-system

Once the pods are in running status, exit the watch loop and run the below to get the Ingress gateway service details. This is the only service that is exposed to an external IP.

$ kubectl get svc istio-ingressgateway -n istio-system

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP
istio-ingressgateway   LoadBalancer   10.27.249.83   35.195.81.130

The external IP is very important here; let us save it to an environment variable so that we can use it in further commands.

$ export \
  INGRESS_IP=$(kubectl -n istio-system get svc \
  istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
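
You can do a quick sanity check that the variable is set; with the example cluster above it would print:

$ echo $INGRESS_IP
35.195.81.130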

Now our Kubernetes cluster is ready for Istio.

For advanced Istio setup options refer to https://istio.io/docs/setup/kubernetes/

Creating the microservice application stack

In one of my previous posts, I showcased how to create a full stack microservice architecture using JHipster and JDL. You can read the post here if you want to learn more details about it. For this exercise, we will use the same application but we will not use the Eureka service discovery option we used earlier. Also, note that the store application is further split into Gateway and Product applications.

Architecture

Here is the architecture of the microservice that we are going to create and deploy today.

It has a gateway application and three microservice applications. Each of them has its own database. You can see that each application has an Envoy proxy attached to the pod as a sidecar. Istio control plane components are also deployed to the same cluster along with Prometheus, Grafana, and Jaeger.

The Ingress gateway from Istio is the only entry point for traffic and it routes traffic to all microservices accordingly. Telemetry is collected from all the containers running in the cluster, including the applications, databases, and Istio components.

Compared to the architecture of the original application here, you can clearly see that we replaced the JHipster registry and Netflix OSS components with Istio. The ELK monitoring stack is replaced with Prometheus, Grafana and Jaeger configured by Istio. Here is the original architecture diagram without Istio for a quick visual comparison.

Application JDL

Let’s take a look at the modified JDL declaration. You can see that we have declared serviceDiscoveryType no here since we will be using Istio for that.
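
As a rough sketch, an application declaration with service discovery disabled looks like the below (the base names match our apps, while the package names and server port here are illustrative):

application {
  config {
    baseName store
    applicationType gateway
    packageName com.jhipster.demo.store
    serviceDiscoveryType no
    prodDatabaseType mysql
  }
  entities *
}

application {
  config {
    baseName product
    applicationType microservice
    packageName com.jhipster.demo.product
    serviceDiscoveryType no
    prodDatabaseType mysql
    serverPort 8081
  }
  entities Product
}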

Deployment JDL

JHipster version 5.7.0 introduced support for deployment declarations straight in the JDL.

We have the below in our JDL which declares our Kubernetes deployment:

deployment {
  deploymentType kubernetes
  appsFolders [store, invoice, notification, product]
  dockerRepositoryName "deepu105"
  serviceDiscoveryType no
  istio autoInjection
  istioRoute true
  kubernetesServiceType Ingress
  kubernetesNamespace jhipster
  ingressDomain "35.195.81.130.nip.io"
}

The serviceDiscoveryType is disabled and we have enabled Istio with autoInjection support — the Envoy sidecars are injected automatically for the selected applications. Istio routes are also generated for the applications by enabling the istioRoute option.
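
Under the hood, autoInjection relies on Istio's sidecar injector webhook: the generated namespace is labeled for injection, roughly equivalent to running the following by hand:

$ kubectl label namespace jhipster istio-injection=enabled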

The kubernetesServiceType is set as Ingress, which is very important as Istio can only work with an Ingress controller service type. For Ingress, we need to set the domain DNS, and this is where the Istio ingress gateway IP is needed. For real use cases, you should map a DNS record to the IP, but for testing and demo purposes we can use a wildcard DNS service like nip.io to resolve our IP. Just append nip.io to the IP and use that as the ingress domain.
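
nip.io simply resolves any subdomain of &lt;IP&gt;.nip.io back to that IP, so no DNS configuration is needed. For example, with the ingress IP from earlier:

$ dig +short store.35.195.81.130.nip.io
35.195.81.130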

Generate the applications and deployment manifests

Now that our JDL is ready, let us scaffold our applications and Kubernetes manifests. Create a new directory and save the above JDL in the directory. Let us name it app-istio.jdl and then run the import-jdl command.

$ mkdir istio-demo && cd istio-demo
$ jhipster import-jdl app-istio.jdl

This will generate all the applications and install the required NPM dependencies in each of them. Once the applications are generated, the deployment manifests will be created and some useful instructions will be printed to the console.

Open the generated code in your favorite IDE/Editor and explore the code.

Deploy to Kubernetes cluster using Kubectl

Now let us build and deploy our applications. Run the ./gradlew bootWar -Pprod jibDockerBuild command in the store, product, invoice, and notification folders to build the docker images. Once the images are built, push them to the docker repo with these commands:

$ docker image tag store deepu105/store

$ docker push deepu105/store

$ docker image tag invoice deepu105/invoice

$ docker push deepu105/invoice

$ docker image tag notification deepu105/notification

$ docker push deepu105/notification

$ docker image tag product deepu105/product

$ docker push deepu105/product

Once the images are pushed, navigate into the generated Kubernetes directory and run the provided startup script. (If you are on Windows, you can run the steps in kubectl-apply.sh manually one by one.)

$ cd kubernetes
$ ./kubectl-apply.sh
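
The script essentially applies the generated manifests for each application in order, roughly like the sketch below (the exact file layout depends on the JHipster version):

$ kubectl apply -f namespace.yml   # creates the jhipster namespace; file name is illustrative
$ kubectl apply -f store/
$ kubectl apply -f invoice/
$ kubectl apply -f notification/
$ kubectl apply -f product/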

Run watch kubectl get pods -n jhipster to monitor the status.

Deployed applications

Once all the pods are in running status, we can explore the deployed applications.

Application gateway

The store gateway application is the entry point for our microservices. Get the URL for the store app by running echo store.$INGRESS_IP.nip.io. We already stored INGRESS_IP in an environment variable while creating the Istio setup. Visit the URL in your favorite browser and explore the application. Try creating some entities for the microservices:
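
With the ingress IP from our example setup, that would print:

$ echo store.$INGRESS_IP.nip.io
store.35.195.81.130.nip.io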

Monitoring

The Istio setup includes Grafana and Prometheus, configured to collect and show metrics from our containers. Let's take a look.

By default, only the Ingress gateway is exposed to an external IP, and hence we will use kubectl port forwarding to set up a secure tunnel to the required services.

Let us create a tunnel for Grafana:

$ kubectl -n istio-system \
    port-forward $(kubectl -n istio-system get pod \
    -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000

Open localhost:3000 to view the Grafana dashboard.

Grafana uses the metrics scraped by Prometheus. We can look at Prometheus directly by creating a tunnel for it and opening localhost:9090:

$ kubectl -n istio-system \
    port-forward $(kubectl -n istio-system get pod \
    -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090
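
In the Prometheus UI you can query Istio's standard metrics directly. As an illustrative example (the metric and label names follow Istio's standard metrics and may vary between versions), the query below shows the request rate to the store service broken down by response code:

sum(rate(istio_requests_total{destination_service=~"store.*"}[5m])) by (response_code)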

Observability

Istio configures Jaeger for distributed tracing and a service graph for service observability. Let us take a look at them.

Create a tunnel for Jaeger and open localhost:16686:

$ kubectl -n istio-system \
    port-forward $(kubectl -n istio-system get pod \
    -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686

You can make some requests in the application and find them in the tracing dashboard by querying for the service. Click on a request to see tracing details:

Let us now create a tunnel for the service graph and open it in localhost:8088/force/forcegraph.html:

$ kubectl -n istio-system \
    port-forward $(kubectl -n istio-system get pod \
    -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088

Conclusion

Istio provides building blocks to build distributed microservices in a more Kubernetes-native way and takes the complexity and responsibility of maintaining those blocks away from you. This means you do not have to worry about maintaining the code or deployments for service discovery, tracing and so on.

The Istio documentation says:

Deploying a microservice-based application in an Istio service mesh allows one to externally control service monitoring and tracing, request (version) routing, resiliency testing, security and policy enforcement, etc., in a consistent way across the services, for the application as a whole.

Werner Vogels (CTO of AWS), quoted at AWS re:Invent:

“In the future, all the code you ever write will be business logic.”

The Istio service mesh helps deliver on that statement. It lets you worry only about the applications that you are developing, and with JHipster that future is truly here: you just need to worry about writing your business logic.

While this is great, it is not a silver bullet. Keep in mind that Istio is fairly new compared to other stable and battle-tested solutions like JHipster Registry (Eureka) or Consul.

Also, another thing to keep in mind is the resource requirements. The same microservices with JHipster Registry or Consul can be deployed to a 2-node cluster with 1 vCPU and 3.75 GB of memory per node in GCP, while you need a 4-node cluster with 2 vCPUs and 7.5 GB of memory per node for Istio-enabled deployments. The default Kubernetes manifest from Istio doesn’t apply any resource requests or limits, and by adding and tuning those, the minimum requirement could be reduced. But I still don’t think you can get it as low as what is needed for the JHipster Registry option.
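
For example, you could experiment with lowering the requests on the heavier control-plane deployments. A rough sketch, with illustrative values rather than recommendations:

$ kubectl -n istio-system set resources deployment istio-pilot \
    --requests=cpu=100m,memory=512Mi
$ kubectl -n istio-system set resources deployment istio-telemetry \
    --requests=cpu=100m,memory=512Mi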

In a real-world use case, the advantage of not having to maintain the complex parts of your infrastructure versus having to pay for more resources is a decision that has to be made based on your priorities and goals.

A huge shout out to Ray Tsang for helping me figure out an optimal cluster size for this application. Also a huge thank you from myself and the community to both Ray and Srinivasa Vasu for adding the Istio support to JHipster.

JHipster provides a great Kubernetes setup to start with, which you can further tweak as per your needs and platform. The Istio support is recent and will improve over time, but it's still a great starting point, especially for learning.

To learn more about JHipster and full stack development, check out my book “Full Stack Development with JHipster” on Amazon and Packt.

There is a great Istio tutorial from Ray Tsang here.

If you like JHipster, don’t forget to give it a star on GitHub.

If you like this article, please leave some claps (did you know that you can clap multiple times on Medium?). I hope to write more about Istio in the near future.

You can follow me on Twitter and LinkedIn.

My other related posts:

  1. Create full Microservice stack using JHipster Domain Language under 30 minutes

  2. Deploying JHipster Microservices on Azure Kubernetes Service (AKS)

Translated from: https://www.freecodecamp.org/news/jhipster-microservices-with-istio-service-mesh-on-kubernetes-a7d0158ba9a3/
