Introduction to Spring Cloud Sleuth: Spring Cloud Sleuth is Spring Cloud's distributed tracing solution, borrowing heavily from Dapper, Zipkin, and HTrace. For most users the tracing should be invisible, and all interactions with external systems should be instrumented automatically. You can simply capture the trace data in your logs, or send it to a remote collector service.
This article is a short walkthrough of distributed tracing with Spring Cloud Sleuth and Zipkin, using Spring Boot 2.2.2.RELEASE and Spring Cloud Hoxton.SR1. It reuses the eureka-server from the earlier article on Spring Cloud service registration and discovery with Eureka (Hoxton) as the registry, and copies the openfeign project from the article on declarative service calls with Spring Cloud OpenFeign (Hoxton) into two new projects. On top of that, a few endpoints are added so the two services can call each other, and Zipkin support is added to enable tracing. The actual endpoints and service-call configuration can be found in the code example linked at the end of this article.
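For orientation only, here is a minimal sketch of what the mutual-call code on sleuth-zipkin-a might look like. The class and method names (SleuthZipkinClient, toSleuthZipkinB2, /toSleuthZipkinA, /home) are inferred from the log output shown later in this article, and the main class is assumed to carry @EnableFeignClients; the actual code lives in the linked example and may differ:

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Feign client on sleuth-zipkin-a calling sleuth-zipkin-b (names inferred from the logs)
@FeignClient("sleuth-zipkin-b")
interface SleuthZipkinClient {

    // sleuth-zipkin-b exposes /toSleuthZipkinA, which in turn calls back into sleuth-zipkin-a
    @GetMapping("/toSleuthZipkinA")
    String toSleuthZipkinB2();
}

// Controller on sleuth-zipkin-a
@RestController
class SleuthZipkinAController {

    @Autowired
    private SleuthZipkinClient sleuthZipkinClient;

    @Value("${server.port}")
    private String port;

    // Entry point used in this article: http://localhost:9800/toSleuthZipkinB2
    @GetMapping("/toSleuthZipkinB2")
    public String toSleuthZipkinB2() {
        return sleuthZipkinClient.toSleuthZipkinB2();
    }

    // Endpoint called back by sleuth-zipkin-b; returns the local ip and port
    @GetMapping("/home")
    public String home() throws UnknownHostException {
        return "ip:" + InetAddress.getLocalHost().getHostAddress() + ",port:" + port;
    }
}

On sleuth-zipkin-b the structure mirrors this: its /toSleuthZipkinA endpoint uses its own Feign client to call sleuth-zipkin-a's /home, which is why one request produces spans in both services.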
Zipkin is an open-source project from Twitter based on Google's Dapper. It collects request-trace data from each service and exposes a REST API for querying that data, which makes it a good basis for monitoring a distributed system. Let's set up the Zipkin server. Since Spring Boot 2.0 you no longer build zipkin-server yourself; instead you run the pre-built jar provided by the Zipkin project. Download and start zipkin-server from a terminal with:
curl -sSL https://zipkin.io/quickstart.sh | bash -s
java -jar zipkin.jar
Open http://<your-zipkin-server-ip>:9411; if the Zipkin UI appears (see the figure below), the deployment succeeded.
To use Docker instead, create a Zipkin container with:
docker run -d --name myzipkin -p 9411:9411 openzipkin/zipkin
Create two Maven projects named spring-cloud-sleuth-zipkin-a and spring-cloud-sleuth-zipkin-b, copy the contents of the openfeign project into both, and adapt them from there. The two services are named sleuth-zipkin-a and sleuth-zipkin-b.
Both projects also need the following dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
Add the following to application.yml:
spring:
  sleuth:
    sampler:
      # Sampling rate for Sleuth; defaults to 0.1, and 1.0 means every trace is sent to the server
      # 1.0 is recommended for development and test environments
      probability: 0.1
  zipkin:
    # Address of the Zipkin server; spans are reported over HTTP
    base-url: http://<your-zipkin-server-ip>:9411
Start the eureka-server cluster, sleuth-zipkin-a, and sleuth-zipkin-b from IDEA, then open the registry; the figure below shows the service registrations:
Call http://localhost:9800/toSleuthZipkinB2; the endpoint responds successfully:
The console of sleuth-zipkin-a prints the following log:
2020-03-17 15:26:42.946 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] ---> GET http://sleuth-zipkin-b/toSleuthZipkinA HTTP/1.1
2020-03-17 15:26:42.946 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] Accept-Encoding: gzip
2020-03-17 15:26:42.946 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] Accept-Encoding: deflate
2020-03-17 15:26:42.946 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] ---> END HTTP (0-byte body)
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] <--- HTTP/1.1 200 (247ms)
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] connection: keep-alive
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] content-length: 22
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] content-type: text/plain;charset=UTF-8
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] date: Tue, 17 Mar 2020 07:26:43 GMT
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] keep-alive: timeout=60
2020-03-17 15:26:43.193 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2]
2020-03-17 15:26:43.194 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] ip:10.0.0.11,port:9800
2020-03-17 15:26:43.194 DEBUG [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false] 3476 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] <--- END HTTP (22-byte body)
The console of sleuth-zipkin-b prints the following log:
2020-03-17 15:26:43.143 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] ---> GET http://sleuth-zipkin-a/home HTTP/1.1
2020-03-17 15:26:43.143 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] Accept-Encoding: gzip
2020-03-17 15:26:43.143 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] Accept-Encoding: deflate
2020-03-17 15:26:43.143 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] ---> END HTTP (0-byte body)
2020-03-17 15:26:43.174 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] <--- HTTP/1.1 200 (30ms)
2020-03-17 15:26:43.174 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] connection: keep-alive
2020-03-17 15:26:43.174 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] content-length: 22
2020-03-17 15:26:43.174 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] content-type: text/plain;charset=UTF-8
2020-03-17 15:26:43.174 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] date: Tue, 17 Mar 2020 07:26:43 GMT
2020-03-17 15:26:43.175 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] keep-alive: timeout=60
2020-03-17 15:26:43.175 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA]
2020-03-17 15:26:43.175 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] ip:10.0.0.11,port:9800
2020-03-17 15:26:43.175 DEBUG [sleuth-zipkin-b,4bd7d6fb618c324e,2abd8338aa3184f4,false] 20556 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] <--- END HTTP (22-byte body)
In a log prefix such as [sleuth-zipkin-a,4bd7d6fb618c324e,04b6bdd618fb1d67,false], the first value is the service name; the second is the Trace ID generated by Spring Cloud Sleuth, which identifies one request chain; the third is the Span ID, which identifies a basic unit of work within that chain; and the fourth indicates whether the trace was exported to the Zipkin server. Here it is false, meaning this trace was not exported: with the sampling rate set to 0.1, only a fraction of traces are reported to Zipkin. Searching for the services on the Zipkin server accordingly returns nothing, as the figure below shows; this trace was never sent to the Zipkin server:
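As an aside, instead of setting spring.sleuth.sampler.probability in application.yml, the sampler can also be declared as a bean. A minimal sketch using Brave's Sampler (the SamplerConfig class name is hypothetical; the rest of this article keeps using the property):

import brave.sampler.Sampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplerConfig {

    // Export every trace to Zipkin, equivalent to spring.sleuth.sampler.probability=1.0
    @Bean
    public Sampler defaultSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
}

The property-based approach used below has the advantage that the rate can be changed per environment without rebuilding the services.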
Now change the sampling rate to 1.0, restart sleuth-zipkin-a and sleuth-zipkin-b, and call http://localhost:9800/toSleuthZipkinB2 again. The console of sleuth-zipkin-a now prints:
2020-03-17 15:34:07.098 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] ---> GET http://sleuth-zipkin-b/toSleuthZipkinA HTTP/1.1
2020-03-17 15:34:07.098 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] Accept-Encoding: gzip
2020-03-17 15:34:07.098 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] Accept-Encoding: deflate
2020-03-17 15:34:07.098 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] ---> END HTTP (0-byte body)
2020-03-17 15:34:07.339 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] <--- HTTP/1.1 200 (241ms)
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] connection: keep-alive
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] content-length: 22
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] content-type: text/plain;charset=UTF-8
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] date: Tue, 17 Mar 2020 07:34:07 GMT
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] keep-alive: timeout=60
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2]
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] ip:10.0.0.11,port:9800
2020-03-17 15:34:07.340 DEBUG [sleuth-zipkin-a,82839ec582de0651,132c4fa91db21974,true] 20536 --- [euth-zipkin-b-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinB2] <--- END HTTP (22-byte body)
The console of sleuth-zipkin-b prints:
2020-03-17 15:34:07.288 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] ---> GET http://sleuth-zipkin-a/home HTTP/1.1
2020-03-17 15:34:07.288 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] Accept-Encoding: gzip
2020-03-17 15:34:07.288 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] Accept-Encoding: deflate
2020-03-17 15:34:07.288 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] ---> END HTTP (0-byte body)
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] <--- HTTP/1.1 200 (33ms)
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] connection: keep-alive
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] content-length: 22
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] content-type: text/plain;charset=UTF-8
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] date: Tue, 17 Mar 2020 07:34:07 GMT
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] keep-alive: timeout=60
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA]
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] ip:10.0.0.11,port:9800
2020-03-17 15:34:07.321 DEBUG [sleuth-zipkin-b,82839ec582de0651,3ed4a03f0248380f,true] 8728 --- [euth-zipkin-a-1] c.r.feignclient.SleuthZipkinClient : [SleuthZipkinClient#toSleuthZipkinA] <--- END HTTP (22-byte body)
Visiting the Zipkin server again, trace information is now available:
Clicking a trace shows its depth, the time spent in each service, its overall duration, and other details. The figure below shows the trace details for this request:
Clicking the dependency-analysis icon in the top left shows the dependencies within the trace; the animated figure below shows the dependencies for this request:
By default Zipkin stores its data in memory, so in production the history is lost whenever the Zipkin server is restarted or fails. The trace data therefore needs to be persisted. Here Elasticsearch is used to store the trace data, and Kibana visualizes what is stored in Elasticsearch. First install Elasticsearch and Kibana; I used Docker, and the steps are described in the Elasticsearch and Kibana sections of the earlier article on deploying common applications with Docker.
The configuration options of the Zipkin server are documented at https://github.com/openzipkin/zipkin/blob/master/zipkin-server/src/main/resources/zipkin-server-shared.yml.
After installation, change the Zipkin server's startup parameters and restart it, either by passing them as environment variables:
STORAGE_TYPE=elasticsearch ES_HOSTS=localhost:9200 java -jar zipkin.jar
or by passing them as properties:
java -jar zipkin.jar --zipkin.storage.type=elasticsearch --zipkin.storage.elasticsearch.hosts=localhost:9200
Open the elasticsearch-head plugin; as shown below, the Zipkin server has automatically created an index in Elasticsearch:
Open the Kibana management page and add the zipkin-span-2020-03-17 index when creating an index pattern, then restart sleuth-zipkin-a and sleuth-zipkin-b and repeatedly call the endpoints the two services expose to each other. As the figure below shows, Kibana now has the trace data and visualizes it, and after restarting the Zipkin server the trace information is still shown in the Zipkin UI, which confirms that persisting trace data in Elasticsearch works.
Zipkin can also persist trace data in MySQL. The official schema file is at:
https://github.com/openzipkin/zipkin/blob/master/zipkin-storage/mysql-v1/src/main/resources/mysql.sql
Import the schema file with Navicat; it creates the three tables shown below:
Change the Zipkin server's startup parameters and restart it:
java -jar zipkin.jar --zipkin.storage.type=mysql --zipkin.storage.mysql.host=localhost --zipkin.storage.mysql.port=3306 --zipkin.storage.mysql.username=root --zipkin.storage.mysql.password=123456 --zipkin.storage.mysql.db=zipkin
After the Zipkin server starts, repeatedly call the endpoints the two services expose to each other and then check MySQL. The figure below shows the data in the zipkin_annotations table:
The figure below shows the data in the zipkin_spans table; the trace data has been stored in the MySQL database:
Then restart the Zipkin server and open the Zipkin UI. As the figure below shows, the earlier trace information is still there, confirming that persisting trace data in MySQL works.
By default the Zipkin client and server communicate through synchronous HTTP requests, so if the Zipkin server becomes unavailable, for example because of network instability, trace messages may not arrive in time. Zipkin therefore also supports asynchronous messaging through RabbitMQ or Kafka.
Change the Zipkin server's startup parameters and restart it so that it collects spans from RabbitMQ:
java -jar zipkin.jar --zipkin.collector.rabbitmq.addresses=<your-rabbitmq-server-ip>:5672 --zipkin.collector.rabbitmq.username=admin --zipkin.collector.rabbitmq.password=admin
Add the following dependency to each service that needs tracing:
<dependency>
    <groupId>org.springframework.amqp</groupId>
    <artifactId>spring-rabbit</artifactId>
</dependency>
Add the following to the application.yml of each service that needs tracing:
spring:
  sleuth:
    sampler:
      # Sampling rate for Sleuth; defaults to 0.1, and 1.0 means every trace is sent to the server
      # 1.0 is recommended for development and test environments
      probability: 1.0
  zipkin:
    # # Address of the Zipkin server, used when reporting over HTTP
    # base-url: http://<your-zipkin-server-ip>:9411
    sender:
      # Report spans through RabbitMQ
      type: rabbit
  rabbitmq:
    # RabbitMQ host
    host: <your-rabbitmq-server-ip>
    # RabbitMQ port
    port: 5672
    # RabbitMQ username
    username: admin
    # RabbitMQ password
    password: admin
Restart sleuth-zipkin-a and sleuth-zipkin-b and open the RabbitMQ management page; a queue named zipkin has been created automatically:
Repeatedly call the endpoints the two services expose to each other, then open the Zipkin UI and the RabbitMQ management page. As shown below, Zipkin has the trace information and RabbitMQ has processed messages:
Then stop the Zipkin server and keep calling the two services' endpoints. As shown below, messages accumulate in RabbitMQ's zipkin queue, confirming that asynchronous reporting through RabbitMQ works.
The same can be done with Kafka. Go to Kafka's bin directory and create a topic named zipkin:
./kafka-topics.sh --create --zookeeper <your-zookeeper-server-ip>:2181 --config max.message.bytes=12800000 --config flush.messages=1 --replication-factor 1 --partitions 1 --topic zipkin
Change the Zipkin server's startup parameters and restart it; if no topic is specified, it defaults to zipkin:
KAFKA_BOOTSTRAP_SERVERS=<your-kafka-server-ip>:9092 KAFKA_TOPIC=zipkin KAFKA_GROUP_ID=zipkin java -Dzipkin.collector.kafka.overrides.auto.offset.reset=latest -jar zipkin.jar
Add the following dependency to each service that needs tracing:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Add the following to the application.yml of each service that needs tracing:
spring:
  sleuth:
    sampler:
      # Sampling rate for Sleuth; defaults to 0.1, and 1.0 means every trace is sent to the server
      # 1.0 is recommended for development and test environments
      probability: 1.0
  zipkin:
    # # Address of the Zipkin server, used when reporting over HTTP
    # base-url: http://<your-zipkin-server-ip>:9411
    sender:
      # Report spans through Kafka
      type: kafka
    kafka:
      # Kafka topic to report to; defaults to zipkin
      topic: zipkin
  kafka:
    # List of host:port pairs used to establish the connection to the Kafka cluster
    bootstrap-servers: <your-kafka-server-ip>:9092
Use the Kafka command-line client to start a consumer and watch the messages being consumed:
./kafka-console-consumer.sh --bootstrap-server <your-kafka-server-ip>:9092 --topic zipkin --from-beginning
Restart sleuth-zipkin-a and sleuth-zipkin-b, repeatedly call the endpoints the two services expose to each other, and open the Zipkin UI. As shown below, Zipkin has the trace information:
Looking at the console consumer, the trace messages are visible there as well:
Then stop the Zipkin server, keep calling the two services' endpoints, and check the consumer lag:
./kafka-consumer-groups.sh --bootstrap-server <your-kafka-server-ip>:9092 --describe --group zipkin
As shown below, messages have accumulated in Kafka, confirming that asynchronous reporting through Kafka works.
Code example