A High-Concurrency Tuning Case Study: logback

Picking up where we left off: after tuning the httpClient thread pool, we kept raising the load on the interface to see whether there was still room for optimization. As before, we captured ServiceB's thread dump and fed it into gceasy (fastThread) for analysis, which produced a startling discovery: a single thread was blocking 61 other threads!

[Figure: thread blocking relationships]

Let's open the stack trace of the culprit, thread http-nio-8080-exec-86, and see what heinous deed it was up to that left its 61 colleagues standing around twiddling their thumbs:

http-nio-8080-exec-86
Stack Trace is:
java.lang.Thread.State: RUNNABLE
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
- locked <0x00000006c0bbef40> (a java.io.BufferedOutputStream)
at ch.qos.logback.core.recovery.ResilientOutputStreamBase.flush(ResilientOutputStreamBase.java:79)
at ch.qos.logback.core.encoder.LayoutWrappingEncoder.doEncode(LayoutWrappingEncoder.java:133)
at ch.qos.logback.core.OutputStreamAppender.writeOut(OutputStreamAppender.java:187)
at ch.qos.logback.core.FileAppender.writeOut(FileAppender.java:279)
at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:212)
at ch.qos.logback.core.rolling.RollingFileAppender.subAppend(RollingFileAppender.java:235)
at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:100)
at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
at ch.qos.logback.classic.Logger.filterAndLog_2(Logger.java:414)
at ch.qos.logback.classic.Logger.info(Logger.java:587)
at com.sawyer.mall.service.impl.CommodityServiceImp.queryStock(CommodityServiceImp.java:225)
...
(middle omitted)
...
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- <0x00000006c0bc1db8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
- <0x00000006c6564120> (a java.util.concurrent.ThreadPoolExecutor$Worker)

The stack trace shows that this thread is writing a log entry through logback; in the code it corresponds to nothing more than:

log.info("some business log");

At the end of the stack trace, under Locked ownable synchronizers, the dump records the locks this thread holds. It holds a suspicious reentrant lock on object <0x00000006c0bc1db8>, so the other threads must be waiting for that lock to be released. Let's open one of the blocked threads, http-nio-8080-exec-89:

http-nio-8080-exec-89
Stack Trace is:
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006c0bc1db8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:210)
at ch.qos.logback.core.rolling.RollingFileAppender.subAppend(RollingFileAppender.java:235)
at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:100)
at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
at ch.qos.logback.classic.Logger.filterAndLog_1(Logger.java:398)
at ch.qos.logback.classic.Logger.info(Logger.java:583)
at com.sawyer.mall.commodity.impl.biz.CommodityBiz.queryByIdFromCache(CommodityBiz.java:253)
(remainder omitted)
...

The line parking to wait for <0x00000006c0bc1db8> confirms that this thread is indeed parked waiting for that object's lock to be released. So why does http-nio-8080-exec-86 hold a reentrant lock while writing a log entry? Does logback write logs synchronously? To answer that, we read the logback source and find that the common appenders (such as the RollingFileAppender our code uses) ultimately write log events through OutputStreamAppender's append method:

    /**
     * All synchronization in this class is done via the lock object.
     */
    protected final ReentrantLock lock = new ReentrantLock(false);

    // ... intervening code omitted ...

    /**
     * Actual writing occurs here.
     * <p>
     * Most subclasses of WriterAppender will need to override this method.
     *
     * @since 0.9.0
     */
    protected void subAppend(E event) {
        if (!isStarted()) {
            return;
        }
        try {
            // this step avoids LBCLASSIC-139
            if (event instanceof DeferredProcessingAware) {
                ((DeferredProcessingAware) event).prepareForDeferredProcessing();
            }
            // the synchronization prevents the OutputStream from being closed while we
            // are writing. It also prevents multiple threads from entering the same
            // converter. Converters assume that they are in a synchronized block.
            lock.lock();
            try {
                writeOut(event);
            } finally {
                lock.unlock();
            }
        } catch (IOException ioe) {
            // as soon as an exception occurs, move to non-started state
            // and add a single ErrorStatus to the SM.
            this.started = false;
            addStatus(new ErrorStatus("IO failure in appender", this, ioe));
        }
    }

In this method, before the actual write in writeOut(event), the code calls lock.lock() to enter a critical section. Take a moment to consider why the lock is needed here.

The answer is simple: it prevents threads sharing the same appender from interfering with one another, for example:

  1. thread 1 is halfway through a write when thread 2 closes the OutputStream;
  2. multiple threads write at the same time and the log output gets interleaved.
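The second hazard is easy to demonstrate in miniature. Below is a minimal sketch (GuardedAppender is a hypothetical class, not logback code) of the same lock-around-write pattern that subAppend uses: because each write happens inside the lock, every event reaches the stream as one uninterrupted unit, even under concurrent callers.

```java
import java.io.ByteArrayOutputStream;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical class mirroring the lock-around-writeOut pattern
// in OutputStreamAppender.subAppend.
public class GuardedAppender {
    private final ReentrantLock lock = new ReentrantLock(false); // non-fair, as in logback
    private final ByteArrayOutputStream out = new ByteArrayOutputStream(); // stands in for the log file

    public void append(String event) {
        byte[] bytes = (event + "\n").getBytes();
        lock.lock(); // serialize writers so events never interleave
        try {
            out.write(bytes, 0, bytes.length);
        } finally {
            lock.unlock();
        }
    }

    public String dump() {
        lock.lock();
        try {
            return out.toString();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedAppender app = new GuardedAppender();
        Runnable writer = () -> { for (int i = 0; i < 1000; i++) app.append("event"); };
        Thread t1 = new Thread(writer), t2 = new Thread(writer);
        t1.start(); t2.start(); t1.join(); t2.join();
        // with the lock, every line arrives intact: exactly 2000 copies of "event"
        long intact = app.dump().lines().filter("event"::equals).count();
        System.out.println(intact); // 2000
    }
}
```

Remove the lock.lock()/unlock() pair and the two writers' bytes can interleave mid-line, which is exactly the corruption the comment in the logback source warns about.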

So how do we optimize for high-concurrency, multi-threaded scenarios so that logback stops being a bottleneck? It is, after all, just a logging utility; letting it cap the throughput would be penny-wise and pound-foolish.

Reading the source and the official documentation, we find that besides the regular synchronous appenders (subclasses of UnsynchronizedAppenderBase), there is an asynchronous AsyncAppender. The official documentation describes it as follows:

AsyncAppender buffers events in a BlockingQueue. A worker thread created by AsyncAppender takes events from the head of the queue, and dispatches them to the single appender attached to AsyncAppender. Note that by default, AsyncAppender will drop events of level TRACE, DEBUG and INFO if its queue is 80% full. This strategy has an amazingly favorable effect on performance at the cost of event loss.

In short, AsyncAppender maintains a blocking queue: log events are first placed on the queue, and a worker thread continuously takes queued write events off and executes them. Crucially, the queue is bounded; once it reaches the 80%-full watermark, events of level TRACE, DEBUG and INFO are discarded. The documentation closes with the observation that this strategy buys a remarkable performance gain at the cost of losing some events.
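The 80% figure falls out of simple arithmetic. Here is a minimal sketch of the default discarding rule (the class and method names are mine, invented for illustration; the real logic is in the source that follows):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical class sketching AsyncAppender's discard watermark.
public class WatermarkDemo {
    final int discardingThreshold;
    final ArrayBlockingQueue<String> queue;

    WatermarkDemo(int queueSize) {
        this.discardingThreshold = queueSize / 5; // logback's default when UNDEFINED
        this.queue = new ArrayBlockingQueue<>(queueSize);
    }

    // mirrors isQueueBelowDiscardingThreshold: fewer than threshold slots remain
    boolean shouldDiscard() {
        return queue.remainingCapacity() < discardingThreshold;
    }

    public static void main(String[] args) {
        WatermarkDemo demo = new WatermarkDemo(256);  // logback's default queueSize
        System.out.println(demo.discardingThreshold); // 51, i.e. 20% of 256
        for (int i = 0; i < 205; i++) demo.queue.offer("event-" + i); // ~80% full
        System.out.println(demo.shouldDiscard());     // false: exactly 51 slots remain
        demo.queue.offer("one more");
        System.out.println(demo.shouldDiscard());     // true: 50 slots remain,
                                                      // TRACE/DEBUG/INFO would now be dropped
    }
}
```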

Enough talk; let's look at the key source code:

    @Override
    public void start() {
        if (isStarted())
            return;
        if (appenderCount == 0) {
            addError("No attached appenders found.");
            return;
        }
        if (queueSize < 1) {
            addError("Invalid queue size [" + queueSize + "]");
            return;
        }
        blockingQueue = new ArrayBlockingQueue(queueSize);

        if (discardingThreshold == UNDEFINED)
            discardingThreshold = queueSize / 5;
        addInfo("Setting discardingThreshold to " + discardingThreshold);
        worker.setDaemon(true);
        worker.setName("AsyncAppender-Worker-" + getName());
        // make sure this instance is marked as "started" before staring the worker Thread
        super.start();
        worker.start();
    }

    @Override
    protected void append(E eventObject) {
        // if the queue is past the watermark and the event is TRACE/DEBUG/INFO,
        // return immediately without logging
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) {
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private boolean isQueueBelowDiscardingThreshold() {
        return (blockingQueue.remainingCapacity() < discardingThreshold);
    }

    private void put(E eventObject) {
        if (neverBlock) {
            blockingQueue.offer(eventObject);
        } else {
            try {
                blockingQueue.put(eventObject);
            } catch (InterruptedException e) {
                // Interruption of current thread when in doAppend method should not be consumed
                // by AsyncAppender
                Thread.currentThread().interrupt();
            }
        }
    }

The Worker thread looks like this:

    class Worker extends Thread {

        public void run() {
            AsyncAppenderBase parent = AsyncAppenderBase.this;
            AppenderAttachableImpl aai = parent.aai;

            // loop while the parent is started
            while (parent.isStarted()) {
                try {
                    E e = parent.blockingQueue.take();
                    aai.appendLoopOnAppenders(e);
                } catch (InterruptedException ie) {
                    break;
                }
            }

            addInfo("Worker thread will flush remaining events before exiting. ");

            for (E e : parent.blockingQueue) {
                aai.appendLoopOnAppenders(e);
                parent.blockingQueue.remove(e);
            }

            aai.detachAndStopAllAppenders();
        }
    }

In essence, this is the same sync-to-async trick we apply with an MQ: writes that used to happen synchronously are first put on a queue, then consumed one by one.
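The same pattern in miniature (MiniAsyncLogger is a hypothetical class, not logback code): callers enqueue and return immediately, while a single daemon worker drains the queue, just like AsyncAppender's Worker.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical class: a stripped-down version of the AsyncAppender pattern.
public class MiniAsyncLogger {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(500);
    private final StringBuilder sink = new StringBuilder(); // stands in for the log file

    private final Thread worker = new Thread(() -> {
        try {
            while (true) {                           // parent.isStarted() in logback
                String event = queue.take();         // blocks until an event arrives
                synchronized (sink) {                // the "writeOut" step
                    sink.append(event).append('\n');
                }
            }
        } catch (InterruptedException ie) {
            // logback's Worker additionally flushes remaining events before exiting
        }
    });

    public MiniAsyncLogger() {
        worker.setDaemon(true); // as in AsyncAppenderBase.start()
        worker.start();
    }

    // caller returns as soon as the event is queued (the neverBlock=false path)
    public void log(String event) throws InterruptedException {
        queue.put(event);
    }

    // helper: poll until the worker has written the given event, or time out
    public boolean waitFor(String event, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            synchronized (sink) {
                if (sink.indexOf(event) >= 0) return true;
            }
            Thread.sleep(1);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        MiniAsyncLogger logger = new MiniAsyncLogger();
        logger.log("hello"); // returns immediately; the worker does the slow write
        System.out.println(logger.waitFor("hello", 1000)); // true
    }
}
```

The request thread's cost drops from "write to disk" to "enqueue a reference", which is why the async appender relieves the lock contention seen in the thread dump.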

Configuration is simple: in logback.xml, attach the appender you want to write asynchronously to an AsyncAppender (the appender names below are illustrative):

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <discardingThreshold>0</discardingThreshold>
        <queueSize>500</queueSize>
        <neverBlock>true</neverBlock>
        <appender-ref ref="FILE"/>
    </appender>

    <root level="INFO">
        <appender-ref ref="ASYNC"/>
        <!-- ... -->
    </root>

There are three important parameters here:

  • discardingThreshold sets the discard watermark; by default it is queueSize / 5, so TRACE/DEBUG/INFO events start being dropped once the queue is about 80% full (setting it to 0 disables discarding);
  • queueSize sets the queue capacity; the default is 256;
  • neverBlock controls what happens when the queue is full: as the put method above shows, true means offer is used and false means put is used; the former is non-blocking (the event is simply dropped when the queue is full), while the latter blocks the logging thread until space frees up.
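The offer/put distinction behind neverBlock can be seen in isolation with a plain ArrayBlockingQueue (this is a standalone illustration, not logback code):

```java
import java.util.concurrent.ArrayBlockingQueue;

// offer vs put on a full queue: the semantics behind neverBlock.
public class OfferVsPut {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1);
        queue.put("first");                       // queue is now full
        boolean accepted = queue.offer("second"); // the neverBlock=true path
        System.out.println(accepted);             // false: event dropped, caller never blocks
        // with neverBlock=false, queue.put("second") here would park the
        // logging thread until the worker frees a slot
    }
}
```

So neverBlock=true trades possible event loss for a guarantee that business threads never stall on logging, which is usually the right trade for a hot path.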
