Containerized application deployment has become the norm: backed by a container orchestration platform such as Kubernetes, applications can be scaled out and released quickly. This article outlines a logging solution for Spring Boot applications running on Kubernetes. The solution leverages the platform itself to collect and analyze application logs in an elegant, concise, and fast way, and it walks through the full configuration for emitting logs in JSON format in production.
The design integrates Loki and Promtail into the Kubernetes environment so that the logs of every Pod in the cluster are collected automatically. Grafana acts as the visualization front end: with Loki configured as a data source, the collected logs can be searched and analyzed. Concretely, the application writes its logs to the console, and Promtail tails that console output in real time and ships it to Loki.
This approach:

- uses far fewer hardware resources than ELK, making it a good fit for monitoring small and medium-sized clusters;
- allows log searches to be filtered by the labels of Kubernetes resources.
The **Logstash Logback Encoder** open-source project provides Logback JSON encoders and appenders. For the latest, detailed usage of the library, refer to the project documentation.
Format | Protocol | Function | LoggingEvent | AccessEvent
---|---|---|---|---
Logstash JSON | Syslog/UDP | Appender | LogstashUdpSocketAppender | LogstashAccessUdpSocketAppender
Logstash JSON | TCP | Appender | LogstashTcpSocketAppender | LogstashAccessTcpSocketAppender
any | any | Appender | LoggingEventAsyncDisruptorAppender | AccessEventAsyncDisruptorAppender
Logstash JSON | any | Encoder | LogstashEncoder | LogstashAccessEncoder
Logstash JSON | any | Layout | LogstashLayout | LogstashAccessLayout
General JSON | any | Encoder | LoggingEventCompositeJsonEncoder | AccessEventCompositeJsonEncoder
General JSON | any | Layout | LoggingEventCompositeJsonLayout | AccessEventCompositeJsonLayout
Following the documentation, we use LoggingEventCompositeJsonEncoder to define a custom JSON encoder. The hands-on configuration starts below.
Source: https://github.com/logfellow/logstash-logback-encoder#including-it-in-your-project
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.2</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-access</artifactId>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
</dependency>
Create a logback-spring.xml file in the resources folder. By default it sends all logs to the console.
<configuration scan="true" scanPeriod="5 seconds">
    <springProperty scope="context" name="appName" source="spring.application.name" defaultValue="unknown"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            ...
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
Configuration reference: https://github.com/logfellow/logstash-logback-encoder#composite-encoderlayout
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
        <timestamp>
            <timeZone>UTC+8</timeZone>
        </timestamp>
        <pattern>
            <omitEmptyFields>true</omitEmptyFields>
            <pattern>
                {
                "timestamp": "%date{ISO8601}",
                "service": "${appName}",
                "level": "%level",
                "pid": "${PID:-}",
                "thread": "%thread",
                "class": "%logger{60}",
                "method": "%method",
                "line": "%line",
                "message": "#tryJson{%message}"
                }
            </pattern>
        </pattern>
        <stackTrace>
            <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                <maxDepthPerThrowable>100</maxDepthPerThrowable>
                <maxLength>20480</maxLength>
                <rootCauseFirst>true</rootCauseFirst>
            </throwableConverter>
        </stackTrace>
    </providers>
</encoder>
With this configuration in place, each log statement printed to the console is rendered as a single JSON object per line.
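A purely illustrative sample line (all field values invented; note that the timestamp provider also emits an @timestamp field by default, and #tryJson embeds the message as nested JSON when the message itself is valid JSON):

    {"@timestamp":"2023-01-01T12:00:00.000+08:00","timestamp":"2023-01-01 12:00:00,000","service":"demo-service","level":"INFO","pid":"1","thread":"http-nio-8080-exec-1","class":"com.example.demo.DemoController","method":"hello","line":"23","message":"handling request"}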
Beyond the default output fields, in a web application we would also like to record the caller's source IP and a request ID with every log entry.
Mapped Diagnostic Context (MDC) is an API provided by SLF4J. Its main purpose is tracing a call chain through the log output in a multi-threaded environment, and it is simple to use.
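A minimal sketch of the API (the class and key names below are made up for illustration): anything put into the MDC on a thread is attached to every log statement made on that thread until it is removed, and the JSON encoder can render it via %mdc{...} in its pattern.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MdcDemo {
    private static final Logger log = LoggerFactory.getLogger(MdcDemo.class);

    public static void main(String[] args) {
        // Attach a value to the current thread's diagnostic context.
        MDC.put("requestId", "42");
        // This statement carries requestId, rendered by "%mdc{requestId}" in the encoder pattern.
        log.info("handling request");
        // Always clean up, otherwise the value leaks to the next task run on this thread.
        MDC.remove("requestId");
    }
}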
Defining an interceptor in Spring is straightforward:
import lombok.extern.slf4j.Slf4j;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.ModelAndView;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.util.UUID;
@Slf4j
@Component
public class LogInterceptor implements HandlerInterceptor {

    private static final String REQUEST_ID = "requestId";

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        // Capture the caller's address; X-Forwarded-For carries the original client IP when the request went through a proxy.
        String xForwardedForHeader = request.getHeader("X-Forwarded-For");
        String remoteIp = request.getRemoteAddr();
        // Generate a per-request identifier and store both values in the MDC so the JSON encoder can render them.
        String uuid = UUID.randomUUID().toString();
        log.info("put requestId ({}) to logger", uuid);
        log.info("request id:{}, client ip:{}, X-Forwarded-For:{}", uuid, remoteIp, xForwardedForHeader);
        MDC.put(REQUEST_ID, uuid);
        MDC.put("remoteIp", remoteIp);
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        // Clean up the MDC so the values do not leak into the next request handled by the same pooled thread.
        String uuid = MDC.get(REQUEST_ID);
        log.info("remove requestId ({}) from logger", uuid);
        MDC.remove(REQUEST_ID);
        MDC.remove("remoteIp");
        HandlerInterceptor.super.postHandle(request, response, handler, modelAndView);
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        HandlerInterceptor.super.afterCompletion(request, response, handler, ex);
    }
}
In the preHandle method we obtain the remote IP and put it into the MDC, and we also initialize a request ID, here a random UUID.
Registering the interceptor with Spring MVC needs no further explanation; the key code is as follows:
import lombok.RequiredArgsConstructor;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
@RequiredArgsConstructor
public class WebMvcConfig implements WebMvcConfigurer {

    private final LogInterceptor logInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Apply the logging interceptor to every request so the MDC is always populated.
        registry.addInterceptor(logInterceptor);
    }
}
Modify the output template in logback-spring.xml by adding the entries "requestId": "%mdc{requestId}" and "remoteIP": "%mdc{remoteIp}" to the pattern:
{
"timestamp": "%date{ISO8601}",
...
"requestId": "%mdc{requestId}",
"remoteIP": "%mdc{remoteIp}",
...
"message": "#tryJson{%message}"
}
Redeploy the application and observe the log output:
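Again with invented values, each line should now also carry the two MDC-backed fields:

    {"@timestamp":"2023-01-01T12:00:05.123+08:00","timestamp":"2023-01-01 12:00:05,123","requestId":"0f8fad5b-d9cb-469f-a165-70867728950e","remoteIP":"10.42.0.17","service":"demo-service","level":"INFO","pid":"1","thread":"http-nio-8080-exec-2","class":"com.example.demo.LogInterceptor","method":"preHandle","line":"29","message":"put requestId (0f8fad5b-d9cb-469f-a165-70867728950e) to logger"}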
As you can see, the log output now includes the attributes we added through the MDC.
Multi-environment configuration is used here so that the production environment emits JSON logs while the development environment keeps the default line-oriented logs. Setting this up is simple: define springProfile tags whose name attribute is the profile name, as shown below.
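For example (this is the same structure used in the full configuration at the end of the article), the default profile keeps the plain console appender while a kubernetes profile switches to the JSON appender:

<springProfile name="default">
    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</springProfile>
<springProfile name="kubernetes">
    <root level="INFO">
        <appender-ref ref="PROD-STDOUT"/>
    </root>
</springProfile>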
In a distributed environment we also want the inter-service call-chain information to appear in the JSON log output. When using Spring Cloud Sleuth, for example, the extra trace fields need to be added to the template. Note that in Sleuth 3.0 the property names have changed; see https://github.com/spring-cloud/spring-cloud-sleuth/wiki/Spring-Cloud-Sleuth-3.0-Migration-Guide#x-b3--mdc-fields-names-are-no-longer-set
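As a sketch only: with Sleuth 3.x the MDC keys are traceId and spanId, so the corresponding template entries would look roughly like the fragment below (the X-B3-* keys used in the full configuration at the end of the article apply to Sleuth 2.x); verify the exact key names against the Sleuth version in use.

"trace": "%X{traceId:-}",
"span": "%X{spanId:-}"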
With Loki configured as a data source, Grafana's log search page supports formatted (pretty-printed) display of the JSON output.
LogQL is Grafana Loki's PromQL-inspired query language: https://grafana.com/docs/loki/latest/logql/
It provides two kinds of queries: log queries, which return the contents of log lines, and metric queries, which calculate values based on the results of log queries.
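As a hedged illustration (the label names depend entirely on how Promtail relabels the Kubernetes metadata), a log query and a metric query might look like:

    {app="demo-service", namespace="prod"} | json | level="ERROR"
    sum(rate({app="demo-service"} |= "ERROR" [5m]))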
The complete logback-spring.xml used by this solution:
<configuration scan="true" scanPeriod="5 seconds">
    <springProperty scope="context" name="appName" source="spring.application.name" defaultValue="unknown"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %green(%-5level) %blue(%property{PID}) --- [%thread] %cyan(%-50logger{50}) : %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="PROD-STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC+8</timeZone>
                </timestamp>
                <pattern>
                    <omitEmptyFields>true</omitEmptyFields>
                    <pattern>
                        {
                        "timestamp": "%date{ISO8601}",
                        "requestId": "%mdc{requestId}",
                        "remoteIP": "%mdc{remoteIp}",
                        "service": "${appName}",
                        "level": "%level",
                        "pid": "${PID:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "parent": "%X{X-B3-ParentSpanId:-}",
                        "thread": "%thread",
                        "class": "%logger{60}",
                        "method": "%method",
                        "line": "%line",
                        "message": "#tryJson{%message}"
                        }
                    </pattern>
                </pattern>
                <stackTrace>
                    <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
                        <maxDepthPerThrowable>100</maxDepthPerThrowable>
                        <maxLength>20480</maxLength>
                        <rootCauseFirst>true</rootCauseFirst>
                    </throwableConverter>
                </stackTrace>
            </providers>
        </encoder>
    </appender>
    <springProfile name="default">
        <root level="INFO">
            <appender-ref ref="STDOUT"/>
        </root>
    </springProfile>
    <springProfile name="kubernetes">
        <root level="INFO">
            <appender-ref ref="PROD-STDOUT"/>
        </root>
    </springProfile>
</configuration>