Recap of earlier articles in this series:
- Overview of open-source encryption: RSA/AES/SHA1/PGP/SM2/SM3/SM4
- Open-source symmetric algorithms AES/SM4/3DES: introduction and implementation
- Verifying the open-source AES/SM4/3DES symmetric implementations
- Open-source asymmetric algorithms RSA/SM2: implementation and applications
- Implementing the asymmetric algorithm RSA
- Implementing the open-source asymmetric algorithm SM2
- An open-source Java microservice code framework
- Using Json in open-source SpringBoot/SpringCloud microservices
- Best practices for encryption in an open-source SpringBoot/SpringCloud microservice framework
+------------+
|   bq-log   |
+------------+
Based on SpringBoot
      |
      |
      v
+------------+      +------------+      +------------+      +-------------------+
|bq-encryptor| ---> |  bq-base   | ---> |bq-boot-root| ---> | bq-service-gateway|
+------------+      +------------+      +------------+      +-------------------+
 Based on            Based on            Based on            Based on
 BouncyCastle        Spring              SpringBoot          SpringBoot-WebFlux
                                              |
                                              |
                                              v
                                        +------------+      +-------------------+
                                        |bq-boot-base| ---> |  bq-service-auth  |
                                        +------------+      +-------------------+
                                         Based on            Based on SpringSecurity-
                                         SpringBoot-Web      Authorization-Server
                                              |
                                              |
                                              |     +-------------------+
                                              +---> |  bq-service-biz   |
                                                    +-------------------+
Notes:
- bq-encryptor: based on the BouncyCastle security framework, open-sourced (see the earlier encryption articles); supports RSA/AES/PGP/SM2/SM3/SM4/SHA-1/HMAC-SHA256/SHA-256/SHA-512/MD5 and other common algorithms, wraps a number of usage scenarios, and comes ready for use from SpringBoot;
- bq-base: a foundation library built on the Spring framework, open-sourced; provides common utility APIs for json/redis/DataSource/guava/http/tcp/thread/jasypt and more;
- bq-log: basic logging code built on SpringBoot, open-sourced; persists interface access logs, invocation logs, and business operation logs to files, and can be extended as needed;
- bq-boot-root: built on SpringBoot, open-sourced; contains neither spring-boot-starter-web nor spring-boot-starter-webflux, so it works in both servlet and netty web-container scenarios; wraps auto-injection of redis/http/schedulers/the encryption machine (HSM)/the security manager and more;
- bq-boot-base: built on spring-boot-starter-web (servlet, BIO), open-sourced; provides the baseline capabilities for business services, with configurable auto-injection of PostgreSQL/rate limiting/bq-log/the web framework/HSM-backed business data encryption and more;
- bq-service-gateway: built on spring-boot-starter-webflux (Netty, NIO), open-sourced; provides Jwt Token security checks, including interface integrity checks, interface data encryption, and Jwt Token validity checks;
- bq-service-auth: built on spring-security-oauth2-authorization-server, open-sourced; issues and refreshes JwtTokens;
- bq-service-biz: a reference business microservice, open-sourced;
+-------------------+
| Web/App Client |
| |
+-------------------+
|
|
v
+--------------------------------------------------------------------+
| | Based On K8S |
| |1 |
| v |
| +-------------------+ 2 +-------------------+ |
| | bq-service-gateway| +-------> | bq-service-auth | |
| | | | | |
| +-------------------+ +-------------------+ |
| |3 |
| +-------------------------------+ |
| v v |
| +-------------------+ +-------------------+ |
| | bq-service-biz1 | | bq-service-biz2 | |
| | | | | |
| +-------------------+ +-------------------+ |
| |
+--------------------------------------------------------------------+
Notes:
- bq-service-gateway: based on SpringCloud-Gateway; performs JwtToken authentication and provides the interface- and data-encryption safeguards;
- bq-service-auth: based on spring-security-oauth2-authorization-server; issues and refreshes JwtTokens;
- bq-service-biz: based on spring-boot-starter-web; a reference business microservice;
- k8s takes on service registration and discovery in the architecture above; because a k8s cloud-native environment is relatively complex to set up, the open-sourced code uses Nacos (primarily) or Eureka as the registration/discovery middleware instead;
- all of the services ship as docker containers, which gives them good cluster mobility and elasticity and allows a smooth, step-by-step migration toward the end goal of k8s;
- the logical architecture is not the physical (deployment) architecture; real deployments also include a DMZ and an intranet zone, which this logical view simplifies away;
Link-tracing implementations are essentially all descendants of Google Dapper, and today they come in two flavors: dependency-based instrumentation and javaagent-based bytecode instrumentation.
The difference between the two: the former requires adding the tracing Java packages to your project as build dependencies; with the latter you merely attach the tracing jar to the launch command, and the monitoring is implemented entirely through bytecode enhancement.
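As a concrete illustration of the two styles (the jar paths and service names below are placeholders, not taken from this project): a dependency-based tracer such as Sleuth/Zipkin is compiled into the application, while an agent-based tracer such as SkyWalking is attached at launch time:

```shell
# Style 1: dependency-based (e.g. Sleuth/Zipkin) - the tracer ships
# inside the application jar, so the launch command is unchanged:
java -jar my-service.jar

# Style 2: javaagent-based (e.g. SkyWalking/Pinpoint) - the app jar is
# untouched; the agent instruments bytecode at class-load time:
java -javaagent:/path/to/skywalking-agent/skywalking-agent.jar \
     -Dskywalking.agent.service_name=my-service \
     -jar my-service.jar
```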
The most commonly used tracing frameworks are compared below:

Feature | Cat | Zipkin | SkyWalking | Pinpoint |
---|---|---|---|---|
Call-chain visualization | yes | yes | yes | yes |
Aggregated reports | very rich | few | fairly rich | very rich |
Service dependency graph | simple | simple | good | good |
Instrumentation | intrusive | intrusive | non-intrusive, bytecode enhancement | non-intrusive, bytecode enhancement |
VM metrics | good | none | yes | good |
Supported languages | java/.net | many | java/.net/php/go/node.js | java/php/python |
Storage | mysql (reports), local files/HDFS (traces) | memory/redis/es/mysql, etc. | H2, es | HBase |
Community | mainly in China | mainstream abroad | Apache-backed | - |
Known users | Meituan, Ctrip | JD; Alibaba's fork is closed-source | Huawei, Xiaomi | - |
APM | yes | no | yes | yes |
Built on | eBay CAL | Google Dapper | Google Dapper | Google Dapper |
WebFlux support | no | yes | yes | no |
Given our actual situation:
- we use SpringCloud-Gateway (built on WebFlux), so Cat and Pinpoint are ruled out;
- for now we only need to bolt on basic tracing, and zipkin is SpringCloud's own child (its SpringCloud integration is SpringCloud-Sleuth), so this framework picks zipkin first; there is no need yet for the sledgehammer that is SkyWalking.

In short, we choose zipkin, small and the most tightly integrated with SpringCloud, as our tracing framework. Its drawback is that it is code-intrusive, so changes to it can affect business stability.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
    <version>3.1.7</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
    <version>3.1.7</version>
</dependency>
When the project pulls in sleuth and zipkin, jar dependency conflicts can arise, especially when webflux must be supported at the same time; if you are curious, inspect the real dependency relationships in the project. Resolving maven conflicts is not covered separately here.
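One quick way to see where the conflicting jars come from (assuming a standard maven setup; the filter pattern below is only an example) is maven's built-in dependency tree:

```shell
# Print the full dependency tree of the current module
mvn dependency:tree

# Narrow the output to the tracing-related artifacts only
mvn dependency:tree -Dincludes=org.springframework.cloud:spring-cloud-sleuth*,io.zipkin.*
```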
java -jar ./zipkin-server/target/zipkin-server-*exec.jar
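Besides running the exec jar as above, the Zipkin server can also be started from the official docker image (the image name `openzipkin/zipkin` and port 9411 are Zipkin's defaults):

```shell
# Start a standalone Zipkin server on its default port 9411
docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin
```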
spring:
  sleuth:
    sampler:
      #sampling rate between 0 and 1; 1 means sample everything
      probability: 1
  zipkin:
    #address of the Zipkin server
    base-url: http://localhost:9411
logging:
  name: ${spring.application.name}
  config: classpath:logback-spring.xml
  basedir: /***/logs/${spring.application.name}/
  format: "%d{yy-MM-dd HH:mm:ss.SSS}[${spring.application.name}][Tid:%X{traceId:-},Sid:%X{spanId:-}][%level][%logger{20}_%M] - %msg%n"
<configuration debug="false">
    <springProperty scope="context" name="LOG_SERVICE" source="spring.application.name" defaultValue="bq-service"/>
    <springProperty scope="context" name="INSTANCE_ID" source="server.port" defaultValue="8080"/>
    <springProperty scope="context" name="BASE_LOG_PATH" source="logging.basedir" defaultValue="/temp/${LOG_SERVICE}"/>
    <springProperty scope="context" name="LOG_LEVEL" source="log.level.ROOT" defaultValue="INFO"/>
    <springProperty scope="context" name="LOG_PATTERN" source="logging.format" defaultValue="%msg%n"/>
    <springProperty scope="context" name="MAX_FILE_SIZE" source="logging.file-size" defaultValue="100MB"/>
    <property name="LOG_PATH" value="${BASE_LOG_PATH}/${LOG_SERVICE}_${INSTANCE_ID}"/>
    <appender name="accessAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/access.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>${LOG_PATH}/%d{yy-MM-dd}/access-%d{yy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${AUDIT_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="defaultAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/default.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>${LOG_PATH}/%d{yy-MM-dd}/default-%d{yy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="errorAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/error.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>${LOG_PATH}/%d{yy-MM-dd}/error-%d{yy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>
    <logger name="com.biuqu" additivity="false">
        <appender-ref ref="consoleAppender"/>
        <appender-ref ref="defaultAppender"/>
    </logger>
    <logger name="com.biuqu.boot.model.MdcAccessLogValve" additivity="false">
        <appender-ref ref="accessAppender"/>
    </logger>
    <logger name="com.biuqu.boot.handler.GlobalExceptionHandler" additivity="false">
        <appender-ref ref="errorAppender"/>
        <appender-ref ref="defaultAppender"/>
    </logger>
    <root level="${LOG_LEVEL}">
        <appender-ref ref="consoleAppender"/>
        <appender-ref ref="defaultAppender"/>
    </root>
</configuration>
Look closely and you can see that logging.format is passed into logback from the SpringBoot yaml configuration; it is the format field shared by the access log, runtime log, error log, and console log, and it carries the traceId and spanId fields.
@Configuration
public class LogConfigurer
{
/**
* Expose the trace id in tomcat access logs
*
* Reference: https://www.appsloveworld.com/springboot/100/36/mdc-related-content-in-tomcat-access-logs
*
* @param env runtime environment
* @return customizer that installs the tailored AccessLog valve
*/
@Bean
public WebServerFactoryCustomizer<ConfigurableTomcatWebServerFactory> accessLog(Environment env)
{
return factory ->
{
final AccessLogValve valve = new MdcAccessLogValve();
valve.setPattern(env.getProperty("server.tomcat.accesslog.pattern"));
//directly replace the native access-log valve
if (factory instanceof TomcatServletWebServerFactory)
{
TomcatServletWebServerFactory tsFactory = (TomcatServletWebServerFactory)factory;
tsFactory.setEngineValves(Lists.newArrayList(valve));
}
};
}
}
@Slf4j
public class MdcAccessLogValve extends AccessLogValve
{
@Override
public void log(CharArrayWriter message)
{
log.info(message.toString());
}
@Override
protected AccessLogElement createAccessLogElement(String name, char pattern)
{
if (pattern == CommonBootConst.TRACE_TAG)
{
return (buf, date, request, response, time) ->
{
//stay compatible when sleuth is not on the classpath
boolean existTrace = ClassUtils.isPresent(SLEUTH_TYPE, this.getClass().getClassLoader());
if (!existTrace)
{
buf.append(Const.MID_LINK);
return;
}
Object context = request.getRequest().getAttribute(TraceContext.class.getName());
if (!(context instanceof TraceContext))
{
return;
}
TraceContext traceContext = (TraceContext)context;
if (CommonBootConst.TRACE_ID.equalsIgnoreCase(name))
{
buf.append(traceContext.traceId());
}
else if (CommonBootConst.SPAN_ID.equalsIgnoreCase(name))
{
buf.append(traceContext.spanId());
}
};
}
return super.createAccessLogElement(name, pattern);
}
/**
* Marker class used to detect whether Sleuth is present
*/
private static final String SLEUTH_TYPE = "org.springframework.cloud.sleuth.TraceContext";
}
MdcAccessLogValve was designed to handle both cases: with sleuth and without sleuth.
- Once the access log starts printing, you will notice a number of extra health-check entries; take care not to configure the heartbeat checks too frequently.
public class CommonThreadFactory implements ThreadFactory
{
//pool-name prefix and global thread counter used when naming new threads
private static final AtomicInteger THREAD_ID = new AtomicInteger(1);
private final String poolPrefix;
public CommonThreadFactory(String poolPrefix)
{
this.poolPrefix = poolPrefix;
}
@Override
public Thread newThread(Runnable r)
{
//capture the submitting thread's trace context
MDCAdapter mdc = MDC.getMDCAdapter();
Map<String, String> map = mdc.getCopyOfContextMap();
Thread t = new Thread(r, this.poolPrefix + "-thread-" + THREAD_ID.getAndIncrement())
{
@Override
public void run()
{
try
{
//install the captured trace context on the pooled thread
if (null != map)
{
MDC.getMDCAdapter().setContextMap(map);
}
super.run();
}
finally
{
//clear the context when done, to avoid memory leaks
MDC.clear();
}
}
};
return t;
}
}
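The capture-install-clear pattern above is easiest to verify in isolation. The sketch below is an illustrative stand-in that uses a plain ThreadLocal map in place of SLF4J's MDC, so it runs without any logging dependency; the class and method names are made up for the demo:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TracePropagationDemo
{
    //stand-in for SLF4J's MDC: a per-thread map of trace fields
    private static final ThreadLocal<Map<String, String>> CONTEXT =
        ThreadLocal.withInitial(HashMap::new);

    //capture the submitter's context and wrap the task so the pooled
    //thread installs it before running and clears it afterwards
    static Runnable wrap(Runnable task)
    {
        Map<String, String> captured = new HashMap<>(CONTEXT.get());
        return () ->
        {
            CONTEXT.get().putAll(captured);
            try
            {
                task.run();
            }
            finally
            {
                //clear so a reused pool thread never sees a stale trace id
                CONTEXT.remove();
            }
        };
    }

    public static String runDemo() throws Exception
    {
        CONTEXT.get().put("traceId", "trace-123");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try
        {
            String[] seen = new String[1];
            //without wrap(...) the pooled thread would see an empty map
            pool.submit(wrap(() -> seen[0] = CONTEXT.get().get("traceId"))).get();
            return seen[0];
        }
        finally
        {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(runDemo());
    }
}
```

Running main prints trace-123, confirming the pooled thread observed the caller's trace id; drop the wrap(...) call and the pooled thread sees nothing.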
@Configuration
@EnableAsync
public class ThreadPoolConfigurer implements AsyncConfigurer
{
@Override
public Executor getAsyncExecutor()
{
return CommonThreadPool.getExecutor("asyncPool", CORE_NUM, MAX_NUM);
}
}
The original authorization server, Spring Security OAuth, was unfortunately discontinued in 2021; since an unmaintained framework is a liability, it had to be replaced by its successor, Spring-Security-OAuth2-Authorization-Server, whose code differs from it enormously. Our SpringBoot/SpringCloud versions are 2.5.x+/3.1.x+, but the highest Spring-Security-OAuth2-Authorization-Server version that still supports JDK 1.8 is 0.2.3, and 0.2.3 in turn supports at most SpringBoot 2.5.x/SpringCloud 3.0.x; the two version trains also have compatibility issues with each other, which nearly drove us mad. So the root pom of bq-service-auth downgrades the SpringBoot/SpringCloud versions, configured as follows:
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <packaging>pom</packaging>
    <artifactId>bq-service-auth</artifactId>
    <version>1.0.0</version>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.biuqu</groupId>
                <artifactId>bq-parent</artifactId>
                <version>${bq.version}</version>
                <scope>import</scope>
            </dependency>
            <dependency>
                <groupId>com.biuqu</groupId>
                <artifactId>bq-boot-base</artifactId>
                <version>1.0.4</version>
                <exclusions>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot-autoconfigure</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot-actuator-autoconfigure</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-starter-sleuth</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot-devtools</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-sleuth-zipkin</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-sleuth-brave</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>org.springframework.cloud</groupId>
                        <artifactId>spring-cloud-starter-loadbalancer</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-sleuth-brave</artifactId>
                <version>${spring.cloud.security.version}</version>
                <exclusions>
                    <exclusion>
                        <groupId>io.zipkin.brave</groupId>
                        <artifactId>brave-instrumentation-mongodb</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>io.zipkin.brave</groupId>
                        <artifactId>brave-instrumentation-kafka-clients</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>io.zipkin.brave</groupId>
                        <artifactId>brave-instrumentation-kafka-streams</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-loadbalancer</artifactId>
                <version>${spring.cloud.security.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-sleuth</artifactId>
                <version>${spring.cloud.security.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-sleuth-zipkin</artifactId>
                <version>${spring.cloud.security.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter</artifactId>
                <version>${spring.boot.security.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-autoconfigure</artifactId>
                <version>${spring.boot.security.version}</version>
                <exclusions>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-actuator-autoconfigure</artifactId>
                <version>${spring.boot.security.version}</version>
                <exclusions>
                    <exclusion>
                        <groupId>org.springframework.boot</groupId>
                        <artifactId>spring-boot</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
23-06-14 09:36:35.122|0f7969b428e4bb69|0f7969b428e4bb69|0:0:0:0:0:0:0:1|0:0:0:0:0:0:0:1|HTTP/1.1|POST /auth/user/get HTTP/1.1|200|235B|1027ms|-|forward:-|refer:-|PostmanRuntime/7.31.3
23-06-14 20:24:24.734|9f70c1d26fa9e9aa|9f70c1d26fa9e9aa|0:0:0:0:0:0:0:1|0:0:0:0:0:0:0:1|HTTP/1.1|POST /auth/user/add HTTP/1.1|200|216B|237ms|-|forward:-|refer:-|PostmanRuntime/7.31.3
23-06-14 20:24:31.246|39705b997ca54c70|39705b997ca54c70|127.0.0.1|127.0.0.1|HTTP/1.1|POST /oauth/token?scope=read&grant_type=client_credentials HTTP/1.1|200|1659B|235ms|-|forward:-|refer:-|PostmanRuntime/7.31.3
23-06-14 20:38:04.965|714f135ca51d2b65|714f135ca51d2b65|127.0.0.1|127.0.0.1|HTTP/1.1|GET /oauth/jwk HTTP/1.1|200|425B|12ms|-|forward:-|refer:-|Apache-HttpClient/4.5.13 (Java/1.8.0_144)
23-06-14 20:38:59.282|bee49f5e7bdfe536|708815f91d8f5fdf|127.0.0.1|127.0.0.1|HTTP/1.1|POST /oauth/token?scope=read&grant_type=client_credentials HTTP/1.1|200|1659B|220ms|-|forward:127.0.0.1|refer:-|PostmanRuntime/7.31.3
bq-service-auth is actually the open-source module that extends the most upstream source code; it will be covered separately later.
Netty's access log is enabled via JVM options at startup:
-Dreactor.netty.http.server.accessLogEnabled=true -Dproject.name=bq-gateway
<configuration debug="false">
    <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="accessLog" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/access.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>${LOG_PATH}/%d{yy-MM-dd}/access-%d{yy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${AUDIT_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <appender name="asyncAccessLog" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="accessLog"/>
    </appender>
    <appender name="asyncNettyLog" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="consoleAppender"/>
        <appender-ref ref="defaultAppender"/>
    </appender>
    <logger name="reactor.netty.http.server.AccessLog" level="INFO" additivity="false">
        <appender-ref ref="asyncAccessLog"/>
    </logger>
    <logger name="reactor.netty.http.server.HttpServer" level="DEBUG" additivity="false" includeLocation="true">
        <appender-ref ref="asyncNettyLog"/>
    </logger>
</configuration>
@Slf4j
@Configuration
public class NettyConfigurer
{
/**
* Configure a custom AccessLog
*
* @return Netty web-server factory customizer
*/
@Bean
public WebServerFactoryCustomizer<NettyReactiveWebServerFactory> nettyServerFactory()
{
return factory ->
{
//configure the access log
factory.addServerCustomizers(httpServer -> httpServer.accessLog(true, x ->
{
List<String> params = Lists.newArrayList();
params.add(x.accessDateTime().format(DateTimeFormatter.ofPattern(TimeUtil.SIMPLE_TIME_FORMAT)));
String traceId = Const.MID_LINK;
if (null != x.responseHeader(CommonBootConst.TRACE_ID))
{
traceId = x.responseHeader(CommonBootConst.TRACE_ID).toString();
}
params.add(traceId);
String spanId = Const.MID_LINK;
if (null != x.responseHeader(CommonBootConst.SPAN_ID))
{
spanId = x.responseHeader(CommonBootConst.SPAN_ID).toString();
}
params.add(spanId);
params.add(x.method().toString());
params.add(x.protocol());
params.add(x.connectionInformation().remoteAddress().toString());
params.add(x.connectionInformation().hostAddress().toString());
params.add(x.status() + StringUtils.EMPTY);
params.add(x.uri().toString());
params.add(x.contentLength() + "B");
params.add(x.duration() + "ms");
String format = StringUtils.repeat("{}|", params.size());
return AccessLog.create(format, params.toArray());
}));
};
}
}
@Slf4j
@Component
@Aspect
public class NettyTraceLogAop extends BaseAop
{
@Before(BEFORE_PATTERN)
@Override
public void before(JoinPoint joinPoint)
{
super.before(joinPoint);
}
@Override
protected void doBefore(Method method, Object[] args)
{
Object webServerObj = args[0];
if (webServerObj instanceof ServerWebExchange)
{
ServerWebExchange exchange = (ServerWebExchange)webServerObj;
MDCAdapter mdc = MDC.getMDCAdapter();
Map<String, String> map = mdc.getCopyOfContextMap();
if (!MapUtils.isEmpty(map))
{
//capture the trace context and cache it on the exchange
exchange.getAttributes().put(GatewayConst.TRACE_LOG_KEY, map);
HttpHeaders headers = exchange.getResponse().getHeaders();
//mirror the trace fields into the response headers of the exchange
for (String traceKey : map.keySet())
{
String value = map.get(traceKey);
if (!headers.containsKey(traceKey))
{
headers.add(traceKey, value);
}
}
}
else
{
//restore the cached trace context for this filter
Map<String, String> cachedMap = exchange.getAttribute(GatewayConst.TRACE_LOG_KEY);
if (!MapUtils.isEmpty(cachedMap))
{
mdc.setContextMap(cachedMap);
}
}
}
}
/**
* Pointcut expression matching all gateway filters
*/
private static final String BEFORE_PATTERN = "(execution (* com.biuqu.boot.*.*.filter.*GatewayFilter.filter(..)))";
}
The current strategy is to read the trace context from the MDC and stash it in the exchange's global attributes; placing it in the response headers is optional.
@Slf4j
@Component
public class RemovingGatewayFilter implements GlobalFilter, Ordered
{
@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain)
{
//pull the cached trace context before the chain completes
Map<String, String> cachedMap = exchange.getAttribute(GatewayConst.TRACE_LOG_KEY);
return chain.filter(exchange).doFinally(s ->
{
if (!MapUtils.isEmpty(cachedMap))
{
MDC.getMDCAdapter().setContextMap(cachedMap);
}
long start = System.currentTimeMillis();
Map<String, Object> attributes = exchange.getAttributes();
if (attributes.containsKey(GatewayConst.TRACE_LOG_KEY))
{
attributes.remove(GatewayConst.TRACE_LOG_KEY);
}
log.info("finally cost:{}ms", System.currentTimeMillis() - start);
MDC.getMDCAdapter().clear();
});
}
}
23-06-14 20:38:21.160|063e2bc1e5223c6d|063e2bc1e5223c6d|POST|HTTP/1.1|/127.0.0.1:63787|/127.0.0.1:9992|500|/oauth/token?scope=read&grant_type=client_credentials|54B|225ms|
23-06-14 20:38:59.003|bee49f5e7bdfe536|bee49f5e7bdfe536|POST|HTTP/1.1|/127.0.0.1:63787|/127.0.0.1:9992|200|/oauth/token?scope=read&grant_type=client_credentials|1647B|304ms|
23-06-14 20:39:41.736|77b9d62ebaafec48|77b9d62ebaafec48|POST|HTTP/1.1|/127.0.0.1:63787|/127.0.0.1:9992|200|/oauth/token?scope=read&grant_type=client_credentials|1673B|251ms|
23-06-14 20:40:55.359|b2d83c74c2911355|b2d83c74c2911355|POST|HTTP/1.1|/0:0:0:0:0:0:0:1:63867|/0:0:0:0:0:0:0:1:9992|200|/oauth/token?scope=read&grant_type=client_credentials|1673B|239ms|
23-06-14 20:41:10.096|0637881570dec847|0637881570dec847|POST|HTTP/1.1|/0:0:0:0:0:0:0:1:63867|/0:0:0:0:0:0:0:1:9992|500|/oauth/enc/token?scope=read&grant_type=client_credentials|53B|11ms|