A disappearing process usually has one of two causes:
1. The system ran out of physical memory (memory usage, I/O, and CPU at the time showed nothing unusual).
2. The Java virtual machine process running the application exited abnormally (crashed).
Note: how does the OOM Killer decide which process to kill first? Every Linux process has an oom_score (in /proc/XXX/oom_score); the higher the value, the more likely the process is to be selected and killed by the OOM Killer.
When the OOM Killer kills a process, it writes the PID and other details of the killed process to the system log /var/log/messages, so you can tell whether a process was killed by the OOM Killer by searching that log. Either of the following commands will show the kill records:
grep -i 'killed process' /var/log/messages
egrep "oom-killer|total-vm" /var/log/messages
A typical log entry for an ECS instance process killed by the OOM Killer looks like this:
[Wed Aug 31 16:36:42 2017] Out of memory: Kill process 43805 (keystone-all) score 249 or sacrifice child
[Wed Aug 31 16:36:42 2017] Killed process 43805 (keystone-all) total-vm:4446352kB, anon-rss:4053140kB, file-rss:68kB
[Wed Aug 31 16:56:25 2017] keystone-all invoked oom-killer: gfp_mask=0x280da, order=0, oom_score_adj=0
[Wed Aug 31 16:56:25 2017] keystone-all cpuset=/ mems_allowed=0
[Wed Aug 31 16:56:25 2017] CPU: 2 PID: 88196 Comm: keystone-all Not tainted 3.10.0-327.13.1.el7.x86_64 #1
A JVM process can also exit abnormally at runtime, for example because of a faulty JNI call. When that happens, an hs_err_[$JVM_PID].log file is written to the JVM process's working directory (which you can find with pwdx [$JVM_PID]). This log file usually shows which thread was executing and why the JVM exited abnormally (if needed, a core dump can be generated for further analysis).
Since these are the only two possibilities, I started checking our server's logs. grep -i 'killed process' /var/log/messages turned up nothing, which ruled out the first case (the process being killed by the OOM Killer). I then went to RocketMQ's working directory and did find an hs_err_1234.log file. Its contents:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f36e85ddaf6, pid=8489, tid=0x00007f352e3df700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_201-b09) (build 1.8.0_201-b09)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.201-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# v ~StubRoutines::jlong_disjoint_arraycopy
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread (0x00007f36a0000cb0): JavaThread "SendMessageThread_1" [_thread_in_Java, id=8603, stack(0x00007f352e2df000,0x00007f352e3e0000)]
siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 0x00007f34f7d67000
Registers:
RAX=0x00000005c01de018, RBX=0x0000000000000000, RCX=0x00007f34f7d670e0, RDX=0xffffffffffffffeb
RSP=0x00007f352e3de4f0, RBP=0x00007f352e3de4f0, RSI=0x00007f34f7d67000, RDI=0x00000005c01de010
R8 =0x0000000000000000, R9 =0x00007f34f7d67000, R10=0x00007f36e85de760, R11=0x00000005c01ddf20
R12=0x0000000000000000, R13=0x00000000000000e8, R14=0x000000077b8467c0, R15=0x00007f36a0000cb0
RIP=0x00007f36e85ddaf6, EFLAGS=0x0000000000010286, CSGSFS=0x0000000000000033, ERR=0x0000000000000006
TRAPNO=0x000000000000000e
Key information extracted from the crash log:
1. SIGBUS (0x7)
2. Problematic frame: # v ~StubRoutines::jlong_disjoint_arraycopy
3. ulimit -c unlimited (note: running this before starting Java enables core dump generation; for details see the Alibaba Cloud ECS image FAQ, 镜像FAQ - 云服务器 ECS - 阿里云)
4. JavaThread "SendMessageThread_1"
Searching the web for these clues turned up articles claiming this was a JDK bug (note: a genuine JDK bug is extremely unlikely, and in hindsight the JDK was not the problem), so I upgraded the JDK from 1.8 to Java 10+. That ran straight into the JVM parameter changes introduced in Java 9+, and the broker failed to start until I modified the runbroker and runnamesrv scripts; for details see jdk9 jdk10 jdk11启动rocketMQ的问题_Dissolute_的博客-CSDN博客.
The broker finally started on the new JDK, but the crashes still happened from time to time. One useful discovery: from Java 10 onwards, hs_err_pid.log is much more detailed. Its contents:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007ff481c70304, pid=24249, tid=24426
#
# JRE version: Java(TM) SE Runtime Environment (10.0.2+13) (build 10.0.2+13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (10.0.2+13, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# v ~StubRoutines::jbyte_disjoint_arraycopy
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
--------------- S U M M A R Y ------------
Command Line: -Xms8192m -Xmx8192m -Xmn4096m -XX:+UseG1GC -XX:G1HeapRegionSize=16m -XX:G1ReservePercent=25 -XX:InitiatingHeapOccupancyPercent=30 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:SurvivorRatio=8 -XX:+DisableExplicitGC -verbose:gc -Xlog:gc:/dev/shm/mq_gc_%p.log -XX:-OmitStackTraceInFastThrow -XX:+AlwaysPreTouch -XX:MaxDirectMemorySize=8g -XX:-UseLargePages -XX:-UseBiasedLocking --add-exports=java.base/jdk.internal.ref=ALL-UNNAMED org.apache.rocketmq.broker.BrokerStartup -c ../conf/2m-2s-async/broker-a.properties
Host: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz, 8 cores, 31G, CentOS Linux release 7.4.1708 (Core)
Time: Fri Nov 19 10:00:37 2021 CST elapsed time: 2224 seconds (0d 0h 37m 4s)
--------------- T H R E A D ---------------
Current thread (0x00007ff2c0010770): JavaThread "SendMessageThread_1" [_thread_in_Java, id=24426, stack(0x00007ff2931c1000,0x00007ff2932c2000)]
Stack: [0x00007ff2931c1000,0x00007ff2932c2000], sp=0x00007ff2932c04f0, free space=1021k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
v ~StubRoutines::jbyte_disjoint_arraycopy
J 3585 c2 java.nio.DirectByteBuffer.put([BII)Ljava/nio/ByteBuffer; java.base@10.0.2 (135 bytes) @ 0x00007ff489909ab6 [0x00007ff4899099c0+0x00000000000000f6]
J 4655 c2 org.apache.rocketmq.store.CommitLog$DefaultAppendMessageCallback.doAppend(JLjava/nio/ByteBuffer;ILorg/apache/rocketmq/store/MessageExtBrokerInner;)Lorg/apache/rocketmq/store/AppendMessageResult; (939 bytes) @ 0x00007ff489bc4f24 [0x00007ff489bc2bc0+0x0000000000002364]
J 4593 c2 org.apache.rocketmq.store.MappedFile.appendMessage(Lorg/apache/rocketmq/store/MessageExtBrokerInner;Lorg/apache/rocketmq/store/AppendMessageCallback;)Lorg/apache/rocketmq/store/AppendMessageResult; (7 bytes) @ 0x00007ff489b778e8 [0x00007ff489b77680+0x0000000000000268]
J 4657 c2 org.apache.rocketmq.store.CommitLog.putMessage(Lorg/apache/rocketmq/store/MessageExtBrokerInner;)Lorg/apache/rocketmq/store/PutMessageResult; (742 bytes) @ 0x00007ff489bcd930 [0x00007ff489bcce80+0x0000000000000ab0]
J 4692 c2 org.apache.rocketmq.store.DefaultMessageStore.putMessage(Lorg/apache/rocketmq/store/MessageExtBrokerInner;)Lorg/apache/rocketmq/store/PutMessageResult; (402 bytes) @ 0x00007ff489bd9730 [0x00007ff489bd9600+0x0000000000000130]
J 4410 c2 org.apache.rocketmq.broker.processor.SendMessageProcessor.sendMessage(Lio/netty/channel/ChannelHandlerContext;Lorg/apache/rocketmq/remoting/protocol/RemotingCommand;Lorg/apache/rocketmq/broker/mqtrace/SendMessageContext;Lorg/apache/rocketmq/common/protocol/header/SendMessageRequestHeader;)Lorg/apache/rocketmq/remoting/protocol/RemotingCommand; (482 bytes) @ 0x00007ff489b091e4 [0x00007ff489b07d60+0x0000000000001484]
J 4136 c2 org.apache.rocketmq.broker.processor.SendMessageProcessor.processRequest(Lio/netty/channel/ChannelHandlerContext;Lorg/apache/rocketmq/remoting/protocol/RemotingCommand;)Lorg/apache/rocketmq/remoting/protocol/RemotingCommand; (103 bytes) @ 0x00007ff489a9b9b4 [0x00007ff489a9b800+0x00000000000001b4]
J 3931 c2 org.apache.rocketmq.remoting.netty.NettyRemotingAbstract$1.run()V (233 bytes) @ 0x00007ff48999e5e4 [0x00007ff48999e520+0x00000000000000c4]
J 3978 c2 org.apache.rocketmq.remoting.netty.RequestTask.run()V (17 bytes) @ 0x00007ff4899e2094 [0x00007ff4899e2040+0x0000000000000054]
J 3802 c2 java.util.concurrent.FutureTask.run()V java.base@10.0.2 (123 bytes) @ 0x00007ff489952308 [0x00007ff4899521e0+0x0000000000000128]
J 4495% c2 java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V java.base@10.0.2 (187 bytes) @ 0x00007ff489b52464 [0x00007ff489b522c0+0x00000000000001a4]
j java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5 java.base@10.0.2
j java.lang.Thread.run()V+11 java.base@10.0.2
v ~StubRoutines::call_stub
V [libjvm.so+0x8abc52] JavaCalls::call_helper(JavaValue*, methodHandle const&, JavaCallArguments*, Thread*)+0x412
V [libjvm.so+0x8aa170] JavaCalls::call_virtual(JavaValue*, Handle, Klass*, Symbol*, Symbol*, Thread*)+0x1d0
V [libjvm.so+0x95248a] thread_entry(JavaThread*, Thread*)+0x8a
V [libjvm.so+0xd94138] JavaThread::thread_main_inner()+0x108
V [libjvm.so+0xd942be] JavaThread::run()+0x13e
V [libjvm.so+0xc05392] thread_native_entry(Thread*)+0xf2
C [libpthread.so.0+0x7ea5] start_thread+0xc5
siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 0x00007ff277130000
Register to memory mapping:
RAX=0x00000006c146e0e5 is pointing into object: 0x00000006c146dee8
[B
{0x00000006c146dee8} - klass: {type array byte}
- length: 4194312
RBX=0x00000000000001ed is an unknown value
RCX=0x00000000000001ed is an unknown value
RDX=0xffffffffffffffcb is an unknown value
RSP=0x00007ff2932c04f0 is pointing into the stack for thread: 0x00007ff2c0010770
RBP=0x00007ff2932c04f0 is pointing into the stack for thread: 0x00007ff2c0010770
RSI=0x00007ff2771301e0 is an unknown value
RDI=0x00000006c146e0d8 is pointing into object: 0x00000006c146dee8
[B
{0x00000006c146dee8} - klass: {type array byte}
- length: 4194312
R8 =0x0000000000000010 is an unknown value
R9 =0x00000006c146dee8 is an oop
[B
{0x00000006c146dee8} - klass: {type array byte}
- length: 4194312
R10=0x00007ff481c71400 is at begin+0 in a stub
StubRoutines::unsafe_arraycopy [0x00007ff481c71400, 0x00007ff481c7143b[ (59 bytes)
R11=0x00007ff277130000 is an unknown value
R12=0x0000000000000000 is an unknown value
R13=0x00000007380184d8 is an oop
java.nio.DirectByteBuffer
{0x00000007380184d8} - klass: 'java/nio/DirectByteBuffer'
R14=0x0000000000000000 is an unknown value
R15=0x00007ff2c0010770 is a thread
With a stack trace for the crashing thread in hand, I asked the RocketMQ open-source community for help while continuing to dig through the error logs myself.
Searching for "java.nio.DirectByteBuffer.put([BII)Ljava/nio/ByteBuffer" led to the first key clue (java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code - Stack Overflow), which explains this class of error. The JDK method involved:
public ByteBuffer put(byte x) {
//writes directly to native memory through Unsafe
unsafe.putByte(ix(nextPutIndex()), ((x)));
return this;
}
Key points from the Stack Overflow answer:
More details:
sun.misc.Unsafe is a private JDK API that allows native memory to be accessed directly from Java. It is the foundation of direct ByteBuffers, and in particular of MappedByteBuffers.
In some cases, accessing a memory-mapped region of a file can trigger an OS-level fault, namely SIGBUS. Typical examples:
(1) Accessing a memory-mapped buffer after the underlying file has been truncated.
(2) A file on a network drive has been mapped into memory, and the mapped buffer is accessed after the network connection is lost.
(3) Trying to write to a page mapped to a file on a tmpfs filesystem when tmpfs runs out of space (by default tmpfs is limited to 50% of total RAM).
The HotSpot JVM cannot efficiently detect these problems in advance. It compiles Unsafe.getX / putX calls down to plain memory-access instructions; extra checks to verify that the memory region is valid would be too expensive. Instead, the JVM handles the SIGBUS signal: if it sees that the fault happened inside an Unsafe call, it posts an InternalError to the current thread and continues execution.
An IOException would be more appropriate, but the JVM cannot throw it or any other exception, because the public ByteBuffer contract does not allow its get/put methods to throw.
If an Unsafe memory access fails inside a JIT-compiled method, the JVM does not throw the exception immediately (again, that would be too costly for an API as hot as ByteBuffer). Instead, it posts an asynchronous InternalError to the current thread, which means the error is actually raised at the nearest native method or the nearest call into the VM runtime. Hence the word "recent" in the error message.
The key point to keep in mind for later is scenario (1): accessing a memory-mapped buffer after the underlying file has been truncated.
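To make scenario (1) concrete, here is a minimal sketch of my own (the class name TruncatedMappingDemo and the file demo.data are made up for illustration; this is not code from RocketMQ). It maps a file much the way a broker maps a commitlog segment, truncates the file while the mapping is still live, and then writes through the MappedByteBuffer. On Linux with HotSpot, that access typically raises SIGBUS, which the JVM either converts into a java.lang.InternalError or, when the fault lands in an arraycopy stub as in our crash, turns into a fatal error with an hs_err_pid log.

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class TruncatedMappingDemo {
    public static void main(String[] args) throws Exception {
        // Map a 4 MB file, roughly the way a broker maps a commitlog segment.
        try (RandomAccessFile raf = new RandomAccessFile("demo.data", "rw")) {
            raf.setLength(4 * 1024 * 1024);
            MappedByteBuffer map = raf.getChannel()
                    .map(FileChannel.MapMode.READ_WRITE, 0, 4 * 1024 * 1024);

            // Simulate the cron job: truncate the file while the mapping is still live
            // (the cron job effectively ran "> commitlog-file" from another process).
            raf.setLength(0);

            // The mapping still exists, but its pages are no longer backed by the file.
            // Touching them raises SIGBUS; depending on where the fault lands
            // (interpreted code, a JIT-compiled Unsafe access, or an arraycopy stub),
            // the JVM reports java.lang.InternalError or, as in our case, crashes
            // and writes an hs_err_pid log.
            map.put(1024 * 1024, (byte) 1);
        }
    }
}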
The RocketMQ community's reply: JDK 10 is not an LTS release, our RocketMQ version was too old, and we should analyze the crash with yCrash.
A quick word on yCrash: it is a tool for analyzing server metrics that claims to diagnose the root cause of various anomalies. The idea is that yc-agent collects the logs and metrics produced by the server and the application and ships them to yc-server, which analyzes them and produces a report. The downsides: yc-agent has to be triggered manually, or integrated with a monitoring tool (Prometheus, AppDynamics, Grafana) that calls the yc-agent API when a problem occurs; there is also a beta "M3" mode that collects data on a schedule, but it collects less information to avoid impacting the application. Overall it was of limited use here.
After several days without progress I began to suspect the ECS instance itself, so I migrated everything to a new one. That bought two quiet days (at that point a few applications that did not depend heavily on this MQ had not yet been moved over), but once all applications were migrated the same problem came back. Next I wondered whether one of the applications was misusing MQ and forcibly shut some of them down, yet the crash still occurred. Total despair...
After a few bowls of chicken soup for the soul, I still hadn't given up. What I tried next:
1. Generate and analyze a core dump file:
Java Linux Core Dump分析 - 芥为之舟
2. Comb through the Linux logs:
What are Linux Logs? How to View Them, Most Important Directories, and More – Stackify
3. Rule out a compromised machine: check for suspicious processes and understand what each process does:
Linux监控磁盘IO及线程读写状态 并用日志记录_TheRainManFSA的博客-CSDN博客
4. Study RocketMQ's message storage design:
https://github.com/apache/rocketmq/blob/master/docs/cn/design.md
Then, by sheer chance, I went through the /var/log/cron log once more and noticed this hourly cron job:
0 * * * * find /data/rocketmq_data/*/store/commitlog -mtime +0 -type f |awk '{print "> "$1}'|bash
At first I assumed it merely listed commitlog files more than 24 hours old, so I paid no attention. In fact it empties them: the awk prints "> <file>" for every match and pipes that into bash, truncating each commitlog file whose mtime is older than 24 hours to zero bytes.
RocketMQ stores producer data and consumer-facing index data in separate files:
The commitlog persists the messages written by producers; as long as it is not deleted, MQ messages are not lost. The data sitting in the page cache can be flushed to the disk file either synchronously or asynchronously.
The consumeQueue stores, for each message in the commitlog, its commitlog offset, tag hashcode, and message size; it acts as a table of contents for locating the actual message quickly (see the decoding sketch after this list).
The indexFile is an index over messages that allows fast lookup by topic/key and by time range; each index entry contains four fields — Key Hash, CommitLog Offset, Timestamp, NextIndex Offset — and the commitlog offset is used to read the message body from the commitlog.
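To illustrate how the consumeQueue acts as an index into the commitlog, here is a small sketch based on the design document linked above (the class and method names are my own, not RocketMQ's). Each consume queue entry is a fixed 20 bytes — an 8-byte commitlog offset, a 4-byte message size, and an 8-byte tag hashcode — so locating a message takes one entry read plus one commitlog read.

import java.nio.ByteBuffer;

public class ConsumeQueueEntry {
    public static final int ENTRY_SIZE = 20; // 8-byte offset + 4-byte size + 8-byte tag hashcode

    // Decodes the n-th consume queue entry and copies the referenced message out of the commitlog.
    public static byte[] readMessage(ByteBuffer consumeQueue, int entryIndex, ByteBuffer commitLog) {
        ByteBuffer entry = consumeQueue.duplicate();
        entry.position(entryIndex * ENTRY_SIZE);

        long commitLogOffset = entry.getLong(); // where the message starts in the commitlog
        int messageSize = entry.getInt();       // total size of the stored message
        long tagsHashCode = entry.getLong();    // used by the broker for tag filtering

        byte[] message = new byte[messageSize];
        ByteBuffer src = commitLog.duplicate();
        src.position((int) commitLogOffset);    // simplification: a real commitlog spans many 1 GB mapped files
        src.get(message);
        return message;
    }
}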
Putting the pieces together — the cron job that truncates commitlog files, RocketMQ's storage design (the broker keeps its commitlog files memory-mapped), the timing of the JVM crashes (almost always on the hour), and the key failure mode highlighted above, (1) accessing a memory-mapped buffer after the underlying file has been truncated — the conclusion is that this disk-space-saving cron job was the culprit: it truncated commitlog files the broker still had mapped, and the next access through the mapping hit an unbacked page, raised SIGBUS inside an arraycopy stub, and brought the JVM down. The recommended fix is to drop the cron job and let RocketMQ expire old files itself, by adjusting the broker configuration file (an example fragment follows the parameter table below):
Broker configuration

| Parameter | Default | Description |
| --- | --- | --- |
| listenPort | 10911 | Port on which the broker accepts client connections |
| namesrvAddr | null | NameServer address |
| brokerIP1 | InetAddress of the network interface | IP the broker listens on |
| brokerIP2 | same as brokerIP1 | In a master/slave setup, if brokerIP2 is configured on the master broker, the slave connects to that address for synchronization |
| brokerName | null | Broker name |
| brokerClusterName | DefaultCluster | Name of the cluster this broker belongs to |
| brokerId | 0 | Broker id: 0 means master, any other positive integer means slave |
| storePathRootDir | $HOME/store/ | Root storage path |
| storePathCommitLog | $HOME/store/commitlog/ | Path where the commit log is stored |
| mappedFileSizeCommitLog | 1024 * 1024 * 1024 (1G) | Size of each commit log mapped file |
| deleteWhen | 04 | Hour of the day at which commit logs past the retention time are deleted |
| fileReservedTime | 72 | File retention time, in hours |
| brokerRole | ASYNC_MASTER | SYNC_MASTER / ASYNC_MASTER / SLAVE |
| flushDiskType | ASYNC_FLUSH | SYNC_FLUSH / ASYNC_FLUSH. A SYNC_FLUSH broker flushes each message to disk before acknowledging the producer; an ASYNC_FLUSH broker flushes messages in groups, which gives better performance |
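For reference, a broker-a.properties fragment along these lines might look as follows (all values are illustrative, not taken from our environment; deleteWhen and fileReservedTime are the two settings that replace the external cron job):

# Illustrative values only; adjust to your own cluster.
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0
namesrvAddr=127.0.0.1:9876
storePathRootDir=/data/rocketmq_data/store
storePathCommitLog=/data/rocketmq_data/store/commitlog
# Let the broker expire old commitlog files itself instead of the cron job:
# delete expired files at 04:00 every day, keep commitlog files for 24 hours.
deleteWhen=04
fileReservedTime=24
brokerRole=ASYNC_MASTER
flushDiskType=ASYNC_FLUSH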
With that change in place, the problem was solved!
One question remains: why were messages more than 24 hours old still being accessed in the first place?