Log4j2 Async Logging Explained and Tuning for High Concurrency

Basics

For an explanation of synchronous vs. asynchronous logging in log4j2, I went through a lot of material and also read the official documentation and source code.
For the differences between the two, as well as the execution flow of a log call, the following article covers it quite thoroughly:
https://www.cnblogs.com/yeyang/p/7944906.html
For a source-level walkthrough of AsyncAppender and AsyncLogger, see:
https://www.cnblogs.com/lewis09/p/10003462.html
https://www.cnblogs.com/lewis09/p/10004117.html
For a detailed introduction to the Disruptor, see:
https://www.jianshu.com/p/bad7b4b44e48

Configuration Optimization


This article is a follow-up to my earlier post on sending logs to Kafka asynchronously with log4j2 (https://blog.csdn.net/qq_35754073/article/details/103386177) and covers tuning log4j2 under high concurrency. The scenario: the service needed to sustain a QPS above 10,000, but in load tests throughput started to drop at around 3,000 QPS. Analysis with APM together with the service's CPU and memory metrics showed that log4j was the main drag on performance. After digging into log4j2's async performance characteristics, I applied the following optimizations to the project, which did measurably improve throughput.
1. Add a tracking_id to the logs so that every log line from the same request can be correlated.
2. Move the Logger configuration from root to named loggers: with most appenders attached to root, some log events that were not produced by the project were also being sent to Kafka, whereas named loggers capture exactly what is intended.
3. Increase the Disruptor ring buffer size and configure a discard policy for when the queue is full, to keep throughput up under high concurrency.

Adding the tracking_id:

// Reuse the caller-supplied id if one was sent, otherwise generate a new one.
String tracingId = request.getHeader("tracing_id");
if (tracingId == null || tracingId.isEmpty()) {
    tracingId = UUID.randomUUID().toString();
}
// Put it into the MDC so that %X{tracking_id} in the layout can print it.
MDC.put("tracking_id", tracingId);

Reading it back in code:

The MDC is backed by a ThreadLocal, so each thread has its own independent copy; within the same request thread the value can simply be read back wherever it is needed:

String tracingId = MDC.get(Constant.TRACKING_ID);
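
Constant.TRACKING_ID here is just a string constant that matches the MDC key, something like the following (the class itself is only illustrative, it is not shown above):

public final class Constant {

    // MDC key written by the filter and read by %X{tracking_id} in the log pattern
    public static final String TRACKING_ID = "tracking_id";

    private Constant() {
    }
}

One caveat: because the value lives in a ThreadLocal, it does not automatically follow work handed off to another thread. If part of the request runs on a thread pool, copy the context over explicitly, roughly like this (executor stands for whatever ExecutorService the project uses):

Map<String, String> context = MDC.getCopyOfContextMap();
executor.submit(() -> {
    if (context != null) {
        MDC.setContextMap(context);   // restore tracking_id on the worker thread
    }
    try {
        // ... work that logs ...
    } finally {
        MDC.clear();                  // worker threads are reused as well
    }
});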

To make log4j2 print the tracking_id on every log line, reference it in the output pattern with %X{tracking_id}:

Configuration:
  Properties:
    Property:
      - name: log-path
        value: "logs"
      - name: charset
        value: "UTF-8"
      # The JsonLayout below references these flags; the values here are assumed defaults
      - name: compact
        value: "true"
      - name: complete
        value: "false"
      - name: stacktraceAsString
        value: "true"
      - name: eventEol
        value: "true"
      - name: log.pattern
        value: "%d{yyyy-MM-dd HH:mm:ss.SSS} -%5p ${PID:-} [%X{tracking_id}] [%15.15t] %-30.30C{1.} : %m%n"
  Appenders:
    Console:
      name: CONSOLE
      target: SYSTEM_OUT
      PatternLayout:
        pattern: ${log.pattern}
    RollingFile:
      - name: REQUEST_LOG
        fileName: ${log-path}/request.log
        filePattern: "${log-path}/historyLog/info-%d{yyyy-MM-dd}-%i.log.gz"
        Filters:
          ThresholdFilter:
            - level: error
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: warn
              onMatch: DENY
              onMismatch: NEUTRAL
            - level: info
              onMatch: ACCEPT
              onMismatch: DENY
        JsonLayout:
          - charset: ${charset}
            compact: ${compact}
            complete: ${complete}
            stacktraceAsString: ${stacktraceAsString}
            eventEol: ${eventEol}
            properties: true
            KeyValuePair:
              - key: tags
                value: REQUEST_LOG
        Policies:
          TimeBasedTriggeringPolicy:
            interval: 1
            modulate: true
        DefaultRolloverStrategy:
          max: 100
#      Alternative: the same appender with a PatternLayout instead of JsonLayout
#      - name: REQUEST_LOG
#        fileName: ${log-path}/request.log
#        filePattern: "${log-path}/historyLog/info-%d{yyyy-MM-dd}-%i.log.gz"
#        PatternLayout:
#          charset: ${charset}
#          pattern: ${log.pattern}
#        Filters:
#          ThresholdFilter:
#            - level: error
#              onMatch: DENY
#              onMismatch: NEUTRAL
#            - level: warn
#              onMatch: DENY
#              onMismatch: NEUTRAL
#            - level: debug
#              onMatch: ACCEPT
#              onMismatch: DENY
#        Policies:
#          TimeBasedTriggeringPolicy:
#            interval: 1
#            modulate: true
#        DefaultRolloverStrategy:
#          max: 100
  Loggers:
    AsyncRoot:
      level: debug
      # includeLocation defaults to false for async loggers because capturing the
      # caller location is expensive; only enable it if the pattern needs %C / %L.
      includeLocation: true
      AppenderRef:
        - ref: CONSOLE
    AsyncLogger:
      - name: REQUEST_LOG
        AppenderRef:
          - ref: REQUEST_LOG

Switch all Loggers to AsyncLogger (SERVICE_LOG and ERROR_LOG refer to appenders that are assumed to be defined in the same way as the REQUEST_LOG appender above):

  Loggers:
    AsyncRoot:
      level: debug
      #      add location in async
      includeLocation: true
      AppenderRef:
        - ref: CONSOLE
    AsyncLogger:
      - name: REQUEST_LOG
        AppenderRef:
          - ref: REQUEST_LOG
      - name: SERVICE_LOG
        AppenderRef:
          - ref: SERVICE_LOG
      - name: ERROR_LOG
        AppenderRef:
          - ref: ERROR_LOG

Using the loggers in code:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Shared logger handles; the names must match the AsyncLogger names in the configuration.
public interface NeuralyzerLog {

    Logger requestLog = LogManager.getLogger("REQUEST_LOG");

    Logger serviceLog = LogManager.getLogger("SERVICE_LOG");

    Logger errorLog = LogManager.getLogger("ERROR_LOG");
}
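
Any class can then reference (or implement) the interface and log through these handles; a hypothetical service to show the idea:

public class OrderService implements NeuralyzerLog {

    public void handle(String orderId) {
        // the tracking_id from the MDC is attached automatically by the layout
        requestLog.info("received order {}", orderId);
        try {
            // ... business logic ...
            serviceLog.info("order {} processed", orderId);
        } catch (Exception e) {
            errorLog.error("order {} failed", orderId, e);
        }
    }
}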

Increasing the Disruptor ring buffer size and configuring the discard policy for a full queue:

This is done through Log4j system properties:

log4j2.asyncLoggerRingBufferSize: sets the length of the ring buffer (the queue). The size should be a power of two (other values are rounded up to the next power of two) and defaults to 256 × 1024 slots in the usual configuration. Note that this property applies when all loggers are asynchronous (AsyncLoggerContextSelector); for the mixed setup used above, with AsyncRoot/AsyncLogger elements in the configuration, the corresponding property is log4j2.asyncLoggerConfigRingBufferSize.

log4j2.AsyncQueueFullPolicy: selects the policy applied when the queue is full. If the property is absent or set to "Default", Log4j creates a DefaultAsyncQueueFullPolicy, which simply makes the logging thread wait until a slot frees up. If it is set to "Discard", Log4j creates a DiscardingAsyncQueueFullPolicy, which by default drops events of level INFO, DEBUG and TRACE while the queue is full. The cut-off can be adjusted with log4j2.DiscardThreshold: events at that level and below are discarded (default INFO).
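
Beyond the two built-in values, the property can also be set to the fully qualified class name of your own AsyncQueueFullPolicy implementation (supported in the 2.x versions I looked at; verify against the version you use). A minimal sketch, purely for illustration, that mirrors roughly what the built-in Discard policy does:

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.async.AsyncQueueFullPolicy;
import org.apache.logging.log4j.core.async.EventRoute;

// Illustrative only: when the ring buffer is full, drop INFO/DEBUG/TRACE events,
// keep the background thread logging synchronously, and let other threads wait.
public class DropInfoWhenFullPolicy implements AsyncQueueFullPolicy {

    @Override
    public EventRoute getRoute(final long backgroundThreadId, final Level level) {
        if (level.isLessSpecificThan(Level.INFO)) {
            return EventRoute.DISCARD;
        }
        return Thread.currentThread().getId() == backgroundThreadId
                ? EventRoute.SYNCHRONOUS
                : EventRoute.ENQUEUE;
    }
}

For most services the built-in Discard policy plus log4j2.DiscardThreshold is enough; a custom class only makes sense if you need different behavior per level or want to add your own metrics.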

For the full list of parameters, see the official documentation:

https://logging.apache.org/log4j/2.x/manual/async.html

Ways to set these system properties:

1. As a JVM startup argument

Add the property when starting the JVM, for example -Dlog4j2.asyncLoggerRingBufferSize=123456789; note that the value is rounded up to the next power of two, so an explicit power of two such as 262144 is a cleaner choice.

2. Via the log4j2.component.properties file supported by log4j2

Create a log4j2.component.properties file on the classpath:

log4j2.asyncLoggerRingBufferSize=123456789
log4j2.AsyncQueueFullPolicy=Discard
# optionally adjust the level at and below which events are discarded (default INFO)
log4j2.DiscardThreshold=INFO

Note that if the same key is set both as a -D argument and in log4j2.component.properties, the -D system property normally takes precedence (Log4j's PropertiesUtil resolves system properties before falling back to the properties file), so avoid configuring the same key in both places.

 
