Notes from Troubleshooting a Production Deadlock

Background

In our order-processing system, converting an online order into a system order requires the database's Sequence table to generate an auto-incrementing serial number, which becomes the order number.
Users reported that saving an order was severely delayed and that editing an order on the page was very slow, sometimes failing with an error.

Investigation

First, I checked the service logs:

Caused by: com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in com/xxx/xxx/dal/sqlmap/mapping/sequence/sequence-number-mapping.xml.
--- The error occurred while applying a parameter map.
--- Check the UPDATE-NEXT_SEQUENCE_NUMBER-BY-TYPE-InlineParameterMap.
--- Check the statement (update failed).
--- Cause: java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction
    ... 33 more

So the update on the Sequence table was timing out while waiting for a lock. My first guess was a database performance problem, but resource usage and I/O both looked normal, and there were no obviously related slow queries.
Continuing, I looked at the statements the database was currently processing:

mysql> show processlist;
+-------+-------+------------------+-----------+---------+------+----------+-------------------------------------------------------------------------------------------+
| Id    | User  | Host             | db        | Command | Time | State    | Info                                                                                      |
+-------+-------+------------------+-----------+---------+------+----------+-------------------------------------------------------------------------------------------+
| 32690 | xxx | 192.168.xx.xx:35738 | wheeljack | Sleep   |    16 |          | NULL                                                                                      |
| 32869 | xxx | 192.168.xx.xx:36030 | wheeljack | Query   |    16 | updating | update sequence_number set next_sequence = 36948685 where  sequence_type = 'REFUND_ORDER' |
+-------+-------+------------------+-----------+---------+------+----------+-------------------------------------------------------------------------------------------+

Two related connections stood out: 32869 is the one trying to update the Sequence table, and 32690 is a suspicious connection in the Sleep state. After running show processlist several times, a pattern emerged: every time an update on the Sequence table was waiting for a lock, there was also a Sleep connection with the same elapsed time. My guess was that some form of deadlock was occurring.

I then ran SHOW ENGINE INNODB STATUS to inspect the lock situation:

| InnoDB |      | 
=====================================
2022-02-25 18:00:30 7f4c62e30700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 21 seconds
------------
TRANSACTIONS
------------
---TRANSACTION 214086751, ACTIVE 5.833 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 360, 1 row lock(s), undo log entries 1
LOCK BLOCKING MySQL thread id: 32690 block 32869
MySQL thread id 32869, OS thread handle 0x7f4c65462700, query id 13251348 192.168.32.234 xxx updating
update sequence_number set next_sequence = 36948685 where  sequence_type = 'REFUND_ORDER'
------- TRX HAS BEEN WAITING 6 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 190 page no 3 n bits 80 index `PRIMARY` of table `xxx`.`sequence_number` trx id 214086751 lock_mode X locks rec but not gap waiting
Record lock, heap no 9 PHYSICAL RECORD: n_fields 5; compact format; info bits 0
------------------
---TRANSACTION 214086750, ACTIVE 5.840 sec
4 lock struct(s), heap size 1184, 1 row lock(s), undo log entries 13
MySQL thread id 32690, OS thread handle 0x7f4bbc9b8700, query id 13251347 192.168.32.234 gaozi cleaning up
--------
----------------------------
END OF INNODB MONITOR OUTPUT
============================

The output shows that connection 32690 is blocking connection 32869, the one updating the Sequence table, yet the blocker itself is just sleeping. After puzzling over this for a while, I suspected contention between a database lock and an application-level lock. To verify that, I went to inspect the running service.

On the application server, running Arthas (Alibaba's open-source diagnostic tool, five stars, a lifesaver for production troubleshooting) with the command thread -b showed that there was indeed a blocked thread:

"org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-4" Id=19 cpuUsage=0.0% deltaTime=0ms time=16271ms 
BLOCKED on com.xxx.wheeljack.biz.manager.SequenceNumberManager@723ad78e 
owned by "org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-9" Id=24
    at com.xxx.wheeljack.biz.manager.SequenceNumberManager.getSequenceByType(SequenceNumberManager.java:50)
    -  blocked on com.xxx.wheeljack.biz.manager.SequenceNumberManager@723ad78e
... 33 more

Thread 19 is blocked by thread 24, at line 50 of SequenceNumberManager. Next, inspect thread 24 with the command thread 24:

"org.springframework.scheduling.quartz.SchedulerFactoryBean#0_Worker-9" Id=24 cpuUsage=0.0% deltaTime=0ms time=16112ms RUNNABLE (in native)
com.xxx.wheeljack.dal.daoImpl.SequenceNumberDao.updateNextSequenceByType(SequenceNumberDao.java:28)

Thread 24 is executing the database update, and its running time is close to thread 19's blocking time, so the two JVM threads map onto the two database connections: one thread holds the application-level lock while waiting for the database lock to update the Sequence table, and the other holds the database lock while waiting for the application-level lock.

Now to the code to confirm the theory, line 50 of SequenceNumberManager:

    public synchronized Long getSequenceByType(SequenceTypeEnum type) {
        // omitted
    }

Sure enough, it is a method marked synchronized.
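The method body is omitted above; presumably it reads the current value and issues the UPDATE seen in the processlist, all while holding the monitor lock. A minimal runnable sketch of that shape, using an in-memory map as a stand-in for the sequence_number table (the class and method names mirror the real ones, but the body is an assumption):

```java
import java.util.HashMap;
import java.util.Map;

public class SequenceNumberManagerSketch {
    // Hypothetical in-memory stand-in for the sequence_number table:
    // sequence_type -> next_sequence
    private final Map<String, Long> table = new HashMap<>();

    public SequenceNumberManagerSketch() {
        table.put("REFUND_ORDER", 36948685L);
    }

    // The whole read-then-update runs while holding this object's monitor.
    public synchronized Long getSequenceByType(String type) {
        Long next = table.get(type);   // SELECT ... WHERE sequence_type = ?
        table.put(type, next + 1);     // UPDATE sequence_number SET next_sequence = ?
        return next;
    }

    public static void main(String[] args) {
        SequenceNumberManagerSketch m = new SequenceNumberManagerSketch();
        System.out.println(m.getSequenceByType("REFUND_ORDER"));
    }
}
```

The synchronized keyword alone is harmless; the trouble comes from what lock the caller's transaction is still holding when another thread enters this method.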

Next, why does the business scenario produce this lock contention? Here is the code that calls the sequence generator:

for (int i = 0; i < list.size(); i++) {
  AfterOrderDo afterOrderDo = list.get(i);
  String asOrderId = String.valueOf(sequenceNumberManager.getSequenceByType(SequenceTypeEnum.REFUND_ORDER));
  afterOrderDo.setAsOrderId(asOrderId);
  afterOrderDao.insert(afterOrderDo);
}

This code runs inside a transaction, and the getSequenceByType call sits inside the loop. Multiple threads call it concurrently, and each thread loops over its list of orders, fetching a sequence for each one. Crucially, the monitor lock is released as soon as the synchronized method returns, but the row lock taken by the UPDATE is held until the enclosing transaction commits. The deadlock therefore unfolds as follows:

1. Thread a starts a transaction to process order a1;
2. Thread a acquires the row lock on the Sequence table and successfully fetches a sequence;
3. Thread b starts a transaction to process order b1;
4. Thread b acquires the synchronized monitor lock, then blocks when its update requests the Sequence table's row lock;
5. Thread a moves on to order a2 and blocks waiting for the synchronized monitor lock.

The two threads now each wait for a lock the other holds, until one side hits the lock wait timeout.
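The five steps above can be reproduced without a database. In this minimal runnable simulation, a ReentrantLock stands in for the InnoDB row lock (held until "commit"), a plain object monitor stands in for the synchronized lock, and the tryLock timeout plays the role of MySQL's "Lock wait timeout exceeded":

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    // rowLock models the row lock on sequence_number (held to commit);
    // monitor models the synchronized method's monitor lock.
    static final ReentrantLock rowLock = new ReentrantLock();
    static final Object monitor = new Object();

    // Returns true if thread b's wait for the "row lock" timed out.
    static boolean simulate() {
        CountDownLatch aHoldsRow = new CountDownLatch(1);
        CountDownLatch bHoldsMonitor = new CountDownLatch(1);
        final boolean[] timedOut = {false};

        Thread a = new Thread(() -> {
            rowLock.lock();                    // steps 1-2: a's transaction holds the row lock
            aHoldsRow.countDown();
            try {
                bHoldsMonitor.await();         // let b grab the monitor first
                synchronized (monitor) { }     // step 5: a now waits for the monitor
            } catch (InterruptedException ignored) {
            } finally {
                rowLock.unlock();              // "commit" happens only after the deadlock breaks
            }
        });

        Thread b = new Thread(() -> {
            try {
                aHoldsRow.await();
                synchronized (monitor) {       // step 4: b holds the monitor...
                    bHoldsMonitor.countDown();
                    // ...then waits for the row lock; times out instead of hanging forever
                    if (rowLock.tryLock(1, TimeUnit.SECONDS)) {
                        rowLock.unlock();
                    } else {
                        timedOut[0] = true;
                    }
                }
            } catch (InterruptedException ignored) { }
        });

        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException e) { return false; }
        return timedOut[0];
    }

    public static void main(String[] args) {
        System.out.println("lock wait timed out: " + simulate());
    }
}
```

The 1-second tryLock is only there so the demo terminates; in production, thread b's wait surfaced as the SQLException in the logs above.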

Once the cause was found, the fix was simple: move the sequence fetch outside the transaction and handle it separately, so no database row lock is held while contending for the monitor lock.
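As a sketch of the fix (the names and the in-memory counter are illustrative stand-ins, not the real SequenceNumberManager): allocate every order number first, then open the transaction with the ids already in hand:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class FixSketch {
    // Hypothetical stand-in for SequenceNumberManager; the real one
    // would update the sequence_number table in its own short transaction.
    static final AtomicLong nextSequence = new AtomicLong(36948685L);

    static String getSequenceByType() {
        return Long.toString(nextSequence.getAndIncrement());
    }

    // Fix: allocate all order ids BEFORE the business transaction opens,
    // so no row lock is held while waiting on the monitor lock.
    static List<String> allocateOrderIds(int count) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            ids.add(getSequenceByType());
        }
        return ids;
    }

    public static void main(String[] args) {
        List<String> ids = allocateOrderIds(3);
        // The transaction would begin here: insert each AfterOrderDo with
        // its pre-allocated id, with no call into the sequence manager inside.
        System.out.println(ids);
    }
}
```

With this ordering, the lock-acquisition graph has no cycle: the monitor lock is always taken and released before any transactional row lock is acquired.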
