Notes from Troubleshooting a Memory Leak in a Message Consumer Service

We run a message consumer service in production, xxx-consumer, built on the spring-kafka framework. Its main thread pulls batches of messages produced by the trading system from the consumer queue (Kafka) and submits them to a worker thread pool, where the records are processed one by one. The listener code:

public abstract class AbstractMessageDispatchListener implements
        BatchAcknowledgingMessageListener<String, String>, ApplicationListener<ApplicationReadyEvent> {
    // Note: the generic parameters were stripped in the original listing; String keys/values are assumed here.

    private ThreadPoolExecutor executor;

    public abstract MessageWorker chooseWorker(ConsumerRecord<String, String> data);

    @Override
    public void onMessage(List<ConsumerRecord<String, String>> datas, Acknowledgment acknowledgment) {
        List<Future<?>> futureList = new ArrayList<>(datas.size());
        try {
            CountDownLatch countDownLatch = new CountDownLatch(datas.size());
            for (ConsumerRecord<String, String> data : datas) {
                Future<?> future = executor.submit(new Worker(data, countDownLatch));
                futureList.add(future);
            }

            // Wait up to 18s for all workers (a 20s budget minus a 2s safety margin).
            countDownLatch.await(20000L - 2000, TimeUnit.MILLISECONDS);
            long countDownLatchCount = countDownLatch.getCount();
            if (countDownLatchCount > 0) {
                // Not every record finished in time: skip the ack so the offsets are not committed.
                return;
            }
            acknowledgment.acknowledge();
        } catch (Exception e) {
            logger.error("onMessage error ", e);
        } finally {
            // Cancel any workers that are still running.
            for (Future<?> future : futureList) {
                if (future.isDone() || future.isCancelled()) {
                    continue;
                }
                future.cancel(true);
            }
        }
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        ThreadFactoryBuilder builder = new ThreadFactoryBuilder();
        builder.setNameFormat(this.getClass().getSimpleName() + "-pool-%d");
        builder.setDaemon(false);
        executor = new ThreadPoolExecutor(12,
                12 * 2,
                60L,
                TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100),
                builder.build());
    }

    private class Worker implements Runnable {
        private ConsumerRecord<String, String> data;
        private CountDownLatch countDownLatch;

        Worker(ConsumerRecord<String, String> data, CountDownLatch countDownLatch) {
            this.data = data;
            this.countDownLatch = countDownLatch;
        }

        @Override
        public void run() {
            try {
                MessageWorker worker = chooseWorker(data);
                worker.work(data.value());
            } finally {
                countDownLatch.countDown();
            }
        }
    }
}

1. Background

One morning the xxx-consumer service fired a large number of alerts. Manual investigation found 300,000+ unprocessed messages. The business logs looked normal, but the GC logs showed a large number of Full GCs, so the initial diagnosis was that Full GC was slowing down message processing and causing the backlog.

[Figure: monitoring alerts and the message backlog]
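
For reference, GC logging of this kind is usually enabled with JVM flags along these lines (JDK 8 syntax shown; the exact flags depend on the JDK version in use, and the log path is illustrative):

-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log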

2. Heap Dump Analysis

Looking at the JVM memory metrics for the past month, we found that the old generation was never reclaimed (the drop on September 22 was caused by a deployment restart), so the preliminary conclusion was a memory leak.

[Figure: JVM old-generation memory usage over the past month]
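
Besides the dashboards, old-generation utilization can be spot-checked from the command line with the JDK's jstat tool, for example (the PID is illustrative; the "O" column is old-space utilization):

jstat -gcutil <pid> 1000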

We exported a heap snapshot from the command line and analyzed the dump file jmap_dump.hprof with Memory Analyzer (MAT), which showed a very clear leak suspect:

[Figure: Memory Analyzer leak suspects report]
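
A snapshot like this is typically captured with the JDK's jmap tool, along these lines (the PID is illustrative):

jmap -dump:live,format=b,file=jmap_dump.hprof <pid>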

Drilling into the thread details, we found that a very large number of ThreadLocalScope objects had been created, chained to one another by references:

[Figure: thread details in MAT showing a large number of chained ThreadLocalScope instances]

We also noticed the FakeSpan class from the distributed-tracing jar (dd-trace-java), so the preliminary conclusion was that the in-house Kafka plugin extension in dd-trace-java has a memory-leak bug.

3. Code Analysis

Reading further into the Kafka plugin code in dd-trace-java, its processing flow is as follows:

First batch of messages

  1. (SpringKafkaConsumerInstrumentation:L22) When BatchAcknowledgingMessageListener.onMessage is entered, the main thread creates scope00 = ThreadLocalScope(Type_BatchMessageListener_Value, toRestore=null)
  2. (ExecutorInstrumentation:L21, L47) When the records are submitted to the thread pool, the worker thread creates scope10 = ThreadLocalScope(Type_BatchMessageListener_Value, toRestore=null)
  3. (SpringKafkaConsumerInstrumentation:L68) When the worker thread processes the record (ConsumerRecord.value), it creates scope11 = ThreadLocalScope(Type_ConsumberRecord_Value, toRestore=scope10)
  4. (ExecutorInstrumentation:L54) After the worker thread finishes the record, it calls scope10.close(), but scopeManager.tlsScope.get() == scope11, so the early return at ThreadLocalScope:L19 is hit and neither scope10 nor scope11 can be GC'd
  5. (SpringKafkaConsumerInstrumentation:L42) When BatchAcknowledgingMessageListener.onMessage exits, the main thread calls scope00.close(), and scope00 can be GC'd

Second batch of messages

  1. (SpringKafkaConsumerInstrumentation:L22) When BatchAcknowledgingMessageListener.onMessage is entered, the main thread creates scope01 = ThreadLocalScope(Type_BatchMessageListener_Value, toRestore=null)
  2. (ExecutorInstrumentation:L21, L47) When the records are submitted to the thread pool, the worker thread creates scope12 = ThreadLocalScope(Type_BatchMessageListener_Value, toRestore=scope11)
  3. (SpringKafkaConsumerInstrumentation:L68) When the worker thread processes the record (ConsumerRecord.value), it creates scope13 = ThreadLocalScope(Type_ConsumberRecord_Value, toRestore=scope12)
  4. (ExecutorInstrumentation:L54) After the worker thread finishes the record, it calls scope12.close(), but scopeManager.tlsScope.get() == scope13, so the early return at ThreadLocalScope:L19 is hit and neither scope12 nor scope13 can be GC'd
  5. (SpringKafkaConsumerInstrumentation:L42) When BatchAcknowledgingMessageListener.onMessage exits, the main thread calls scope01.close(), and scope01 can be GC'd

From the above we can see that the ThreadLocalScopes created on the main thread are GC'd correctly, while the ThreadLocalScopes created on the pool threads keep referencing one another through their toRestore chain, can never be reclaimed, and therefore leak memory. The sketch below makes this concrete.
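
Here is a minimal sketch (a hypothetical standalone class, not the production code path; it only uses the FakeSpan, ThreadLocalScopeManager and ThreadLocalScope classes shown in the listings below) that replays on a single thread what happens on one pool thread across two batches:

// Illustrative only: replays the scope operations from the analysis above.
public class ScopeLeakSketch {
    public static void main(String[] args) {
        ThreadLocalScopeManager manager = new ThreadLocalScopeManager();

        // Batch 1 on the worker thread: TracedRunnable activates scope10, RecoredValueAdvice activates scope11.
        Scope scope10 = manager.activate(new FakeSpan(), false); // toRestore = null
        Scope scope11 = manager.activate(new FakeSpan(), false); // toRestore = scope10
        scope10.close(); // tlsScope.get() == scope11, so close() returns early and restores nothing
        // scope11 is never closed and stays in the pool thread's ThreadLocal.

        // Batch 2 on the same worker thread:
        Scope scope12 = manager.activate(new FakeSpan(), false); // toRestore = scope11 (the leftover)
        Scope scope13 = manager.activate(new FakeSpan(), false); // toRestore = scope12
        scope12.close(); // returns early again; the chain scope13 -> scope12 -> scope11 -> scope10 keeps growing
    }
}

In the listings that follow, BatchMessageListenerAdvice creates and closes the main-thread scopes (scope00/scope01); TracedRunnable in ExecutorInstrumentation creates scope10/scope12 on the pool thread and tries to close them in its finally block; RecoredValueAdvice creates scope11/scope13 when ConsumerRecord.value() is called but has no exit advice to close them. The plugin code, starting with SpringKafkaConsumerInstrumentation: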

@AutoService(Instrumenter.class)
public final class SpringKafkaConsumerInstrumentation extends Instrumenter.Configurable {
 
    @Override
    public AgentBuilder apply(final AgentBuilder agentBuilder) {
        return agentBuilder
                .type(hasSuperType(named("org.springframework.kafka.listener.BatchAcknowledgingMessageListener")))
                .transform(DDAdvice.create().advice(isMethod().and(isPublic()).and(named("onMessage")),
                        BatchMessageListenerAdvice.class.getName()))
                .type(named("org.apache.kafka.clients.consumer.ConsumerRecord"))
                .transform(DDAdvice.create().advice(isMethod().and(isPublic()).and(named("value")),
                        RecoredValueAdvice.class.getName()))
                .asDecorator();
    }
 
    public static class BatchMessageListenerAdvice {
 
        @Advice.OnMethodEnter(suppress = Throwable.class)
        public static Scope before() {
            FakeSpan span = new FakeSpan();
            span.setKind(FakeSpan.Type_BatchMessageListener_Value);
            Scope scope = GlobalTracer.get().scopeManager().activate(span, false);
            return scope;
        }
 
        @Advice.OnMethodExit(suppress = Throwable.class)
        public static void after(@Advice.Enter Scope scope) {
            while (true) {
                Span span = GlobalTracer.get().activeSpan();
                if (span != null && span instanceof FakeSpan) {
                    FakeSpan fakeSpan = (FakeSpan) span;
                    if (fakeSpan.getKind().equals(FakeSpan.Type_ConsumberRecord_Value)) {
                        GlobalTracer.get().scopeManager().active().close();
                    } else {
                        break;
                    }
                } else {
                    break;
                }
            }
            if (scope != null) {
                scope.close();
            }
        }
    }
 
    public static class RecoredValueAdvice {
 
        @Advice.OnMethodEnter(suppress = Throwable.class)
        public static void before(@Advice.This ConsumerRecord record) {
            Span activeSpan = GlobalTracer.get().activeSpan();
            if (activeSpan instanceof FakeSpan) {
                FakeSpan proxy = (FakeSpan) activeSpan;
                if (proxy.getKind().equals(FakeSpan.Type_ConsumberRecord_Value)) {
                    GlobalTracer.get().scopeManager().active().close();
                    activeSpan = GlobalTracer.get().activeSpan();
                    if (activeSpan instanceof FakeSpan) {
                        proxy = (FakeSpan) activeSpan;
                    }
                }
 
                if (proxy.getKind().equals(FakeSpan.Type_BatchMessageListener_Value)) {
                    final SpanContext spanContext = TracingKafkaUtils.extractSecond(record.headers(), GlobalTracer.get());
                    if (spanContext != null) {
                        FakeSpan consumerProxy = new FakeSpan();
                        consumerProxy.setContext(spanContext);
                        consumerProxy.setKind(FakeSpan.Type_ConsumberRecord_Value);
                        GlobalTracer.get().scopeManager().activate(consumerProxy, false);
                    }
                }
            }
        }
    }
}
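
ExecutorInstrumentation wraps Runnables passed to ExecutorService.submit in a TracedRunnable, so the submitting thread's active span is re-activated on the pool thread and the resulting scope is closed in a finally block:
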
@AutoService(Instrumenter.class)
public final class ExecutorInstrumentation extends Instrumenter.Configurable {
 
    @Override
    public AgentBuilder apply(final AgentBuilder agentBuilder) {
        return agentBuilder
                .type(not(isInterface()).and(hasSuperType(named(ExecutorService.class.getName()))))
                .transform(DDAdvice.create().advice(named("submit").and(takesArgument(0, Runnable.class)),
                        SubmitTracedRunnableAdvice.class.getName()))
                .asDecorator();
    }
 
 
    public static class SubmitTracedRunnableAdvice {
 
        @Advice.OnMethodEnter(suppress = Throwable.class)
        public static TracedRunnable wrapJob(
                @Advice.This Object dis,
                @Advice.Argument(value = 0, readOnly = false) Runnable task) {
            if (task != null && (!dis.getClass().getName().startsWith("slick.util.AsyncExecutor"))) {
                task = new TracedRunnable(task, GlobalTracer.get());
                return (TracedRunnable) task;
            }
            return null;
        }
    }
 
    public static class TracedRunnable implements Runnable {
        private final Runnable delegate;
        private final Span span;
        private final Tracer tracer;
 
        public TracedRunnable(Runnable delegate, Tracer tracer) {
            this.delegate = delegate;
            this.tracer = tracer;
            if (tracer != null) {
                this.span = tracer.activeSpan();
            } else {
                this.span = null;
            }
        }
 
        @Override
        public void run() {
            Scope scope = null;
            if (span != null && tracer != null) {
                scope = tracer.scopeManager().activate(span, false);
            }
 
            try {
                delegate.run();
            } finally {
                if (scope != null) {
                    scope.close();
                }
            }
        }
    }
}
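
ThreadLocalScopeManager and ThreadLocalScope implement the thread-local scope stack that all of the advice above relies on; note the early return in close() when scopes are closed out of order:
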
public class ThreadLocalScopeManager implements ScopeManager {
 
    final ThreadLocal<ThreadLocalScope> tlsScope = new ThreadLocal<>();
 
    @Override
    public Scope activate(Span span, boolean finishOnClose) {
        return new ThreadLocalScope(this, span, finishOnClose);
    }
 
    @Override
    public Scope active() {
        return tlsScope.get();
    }
}
public class ThreadLocalScope implements Scope {
    private final ThreadLocalScopeManager scopeManager;
    private final Span wrapped;
    private final boolean finishOnClose;
    private final ThreadLocalScope toRestore;
 
    ThreadLocalScope(ThreadLocalScopeManager scopeManager, Span wrapped, boolean finishOnClose) {
        this.scopeManager = scopeManager;
        this.wrapped = wrapped;
        this.finishOnClose = finishOnClose;
        this.toRestore = scopeManager.tlsScope.get();
        scopeManager.tlsScope.set(this);
    }
 
    @Override
    public void close() {
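        // Early return referred to as ThreadLocalScope:L19 in the analysis above:
        // closing an outer scope while an inner scope is still active does nothing,
        // leaving the inner scope (and its toRestore chain) parked in the ThreadLocal.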
        if (scopeManager.tlsScope.get() != this) {
            // This shouldn't happen if users call methods in the expected order. Bail out.
            return;
        }
 
        if (finishOnClose) {
            wrapped.finish();
        }
 
        scopeManager.tlsScope.set(toRestore);
    }
 
    @Override
    public Span span() {
        return wrapped;
    }
}

4. Conclusion

RecoredValueAdvice never closes the scopes it creates; it relies on BatchMessageListenerAdvice to clean them up.

However, when BatchAcknowledgingMessageListener.onMessage exits (SpringKafkaConsumerInstrumentation:L27), only the ThreadLocalScope created on the main thread is closed; the ThreadLocalScopes created on the pool threads are not. As a result, the worker-thread scopes keep referencing one another, can never be GC'd, and memory leaks. A possible direction for a fix is sketched below.
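
One possible fix (a sketch only, assuming the plugin keeps the Byte Buddy advice structure shown above; this is not the actual patch) is to have RecoredValueAdvice close the scope it activates when ConsumerRecord.value() returns, on the same worker thread, instead of relying on the main-thread advice:

public static class RecoredValueAdvice {

    @Advice.OnMethodEnter(suppress = Throwable.class)
    public static Scope before(@Advice.This ConsumerRecord record) {
        Span activeSpan = GlobalTracer.get().activeSpan();
        if (activeSpan instanceof FakeSpan
                && ((FakeSpan) activeSpan).getKind().equals(FakeSpan.Type_BatchMessageListener_Value)) {
            SpanContext spanContext = TracingKafkaUtils.extractSecond(record.headers(), GlobalTracer.get());
            if (spanContext != null) {
                FakeSpan consumerProxy = new FakeSpan();
                consumerProxy.setContext(spanContext);
                consumerProxy.setKind(FakeSpan.Type_ConsumberRecord_Value);
                // Activate the record-level scope and hand it to the exit advice.
                return GlobalTracer.get().scopeManager().activate(consumerProxy, false);
            }
        }
        return null;
    }

    @Advice.OnMethodExit(suppress = Throwable.class)
    public static void after(@Advice.Enter Scope scope) {
        if (scope != null) {
            // tlsScope still points at this scope here, so close() restores toRestore
            // and nothing is left behind on the worker thread.
            scope.close();
        }
    }
}

With the scope closed in the exit advice, the thread-local is restored to the scope created by TracedRunnable, which ExecutorInstrumentation then closes normally, so no ThreadLocalScope chain builds up on the pool threads, and the cleanup loop in BatchMessageListenerAdvice.after would no longer have anything to clean up.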
