Database hitting enq: TX - row lock contention
Coming in this morning, the AWR report showed a large number of enq: TX - row lock contention wait events. We had never seen this before, which was odd. First, check the wait events in gv$session:
scrac1/sys@JSCN> select event,count(*) from gv$session group by event;
EVENT COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message from client 275
Streams AQ: waiting for messages in the queue 2
ASM background timer 2
wait for unread message on broadcast channel 1
gcs remote message 12
ges remote message 2
jobq slave wait 3
rdbms ipc message 34
smon timer 2
pmon timer 2
Streams AQ: qmn slave idle wait 3
class slave wait 2
PX Deq: reap credit 2
PX Deq: Execution Msg 1
Streams AQ: waiting for time management or cleanup tasks 2
Streams AQ: qmn coordinator idle wait 2
DIAG idle wait 2
17 rows selected.
The wait event does not appear in these results, so let's look in ASH instead:
scrac1/sys@JSCN> select SAMPLE_TIME,SESSION_ID,USER_ID,SQL_ID,EVENT,CURRENT_OBJ#,CURRENT_FILE#,CURRENT_BLOCK# from GV_$ACTIVE_SESSION_HISTORY
where event like 'enq: TX%' and module='JDBC Thin Client' and rownum<=10;
SAMPLE_TIME SESSION_ID USER_ID SQL_ID EVENT CURRENT_OBJ# CURRENT_FILE# CURRENT_BLOCK#
------------------------------ ---------- ---------- ------------- ---------------------------------------------------------------- ------------ ------------- --------------
02-AUG-12 03.46.26.854 PM 3120 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3139 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3158 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3186 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3194 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3197 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3198 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3201 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3212 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
02-AUG-12 03.46.26.854 PM 3213 59 a209wq32zy3jw enq: TX - row lock contention 58359 1024 510
The result shows many enq: TX - row lock contention wait events, all from yesterday. What is going on? This calls for a closer look at the columns of GV_$ACTIVE_SESSION_HISTORY.
The most obvious lead is the SQL_ID column, so query:
select * from V$SQL where sql_id='a209wq32zy3jw';
To my surprise, nothing came back. That can't be right: the SQL is only a day old and shouldn't have been aged out of the shared pool yet. After thinking about it, since this is RAC, maybe the session was connected to the other node:
select * from GV_$SQL where sql_id='a209wq32zy3jw';
scrac1/sys@JSCN> select INST_ID,SQL_TEXT from GV_$SQL where sql_id='a209wq32zy3jw';
INST_ID
----------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1
update QUARTZ.TASK_LOG SET TASK_ID = :1, STATUS = :2, RESULT_DESC = :3, START_TIME = :4, END_TIME = :5
where LOG_ID = :6
1
update QUARTZ.TASK_LOG SET TASK_ID = :1, STATUS = :2, RESULT_DESC = :3, START_TIME = :4, END_TIME = :5
where LOG_ID = :6
Both the table and the SQL look unfamiliar; we had not seen them before. GV_$ACTIVE_SESSION_HISTORY also shows USER_ID=59:
scrac1/sys@JSCN> select USERNAME,USER_ID,CREATED from dba_users where USER_ID='59' ;
USERNAME USER_ID CREATED
------------------------------ ---------- ------------
QUARTZ 59 25-JUN-12
Next, find out which table is hitting the lock waits:
scrac1/sys@JSCN> select * from dba_objects where object_id='58359';
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIM TIMESTAMP STATUS T G S
------------------------------ ------------------------------ ------------------------------ ---------- -------------- ------------------- ------------ ------------ ------------------- ------- - - -
QUARTZ TASK_LOG 58359 58359 TABLE 02-AUG-12 02-AUG-12 2012-08-02:14:33:28 VALID N N N
Then it clicked: the developers spent all of yesterday using this user for scheduled jobs, testing over and over. That is where the waits came from. Cause identified.
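ASH can also confirm this directly: from 10g onward it records who was holding the lock at each sample. A sketch against the same SQL_ID (in RAC the holder may of course be on the other instance):

```sql
-- Sketch: see which session was holding the row lock at each ASH sample.
-- BLOCKING_SESSION / BLOCKING_SESSION_SERIAL# identify the holder at sample time.
SELECT SAMPLE_TIME, INST_ID, SESSION_ID,
       BLOCKING_SESSION, BLOCKING_SESSION_SERIAL#
  FROM GV$ACTIVE_SESSION_HISTORY
 WHERE EVENT = 'enq: TX - row lock contention'
   AND SQL_ID = 'a209wq32zy3jw'
   AND ROWNUM <= 10;
```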
If this kind of problem shows up in live sessions, possible fixes are:
1: Find the holder via v$session (BLOCK=1) and ask that user to commit the transaction
2: Map the sid to a pid and kill that process
3: Change the SQL so it fails fast instead of queueing, e.g. SELECT * FROM QRTZ_LOCKS WHERE LOCK_NAME = :1 FOR UPDATE NOWAIT
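Methods 1 and 2 can be sketched together. On 10g and later, V$SESSION exposes a BLOCKING_SESSION column, which is simpler than scanning for BLOCK=1; the kill statement is commented out as a last resort:

```sql
-- Sketch: find currently blocked sessions and who is blocking them.
-- BLOCKING_SESSION gives the holder's SID (10g+); join back to V$SESSION
-- to get the holder's SERIAL#, in case the owner cannot simply commit.
SELECT w.SID      AS waiter_sid,
       w.EVENT,
       b.SID      AS blocker_sid,
       b.SERIAL#  AS blocker_serial,
       b.USERNAME
  FROM V$SESSION w
  JOIN V$SESSION b ON b.SID = w.BLOCKING_SESSION
 WHERE w.BLOCKING_SESSION IS NOT NULL;

-- Then, only if the holder cannot commit or roll back:
-- ALTER SYSTEM KILL SESSION '<blocker_sid>,<blocker_serial>';
```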
In general, if a production system shows a large number of these waits and human causes are ruled out, it is time to examine the application.
Let's also look at what the official documentation says about TX - row lock contention:
http://docs.oracle.com/cd/B19306_01/server.102/b14211/instance_tune.htm#sthref761
10.3.7.2.4 TX enqueue
These are acquired exclusive when a transaction initiates its first change and held until the transaction does a COMMIT or ROLLBACK.
Waits for TX in mode 6: occurs when a session is waiting for a row level lock that is already held by another session. This occurs when one user is updating or deleting a row, which another session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
The solution is to have the first session already holding the lock perform a COMMIT or ROLLBACK.
Waits for TX in mode 4 can occur if the session is waiting for an ITL (interested transaction list) slot in a block. This happens when the session wants to lock a row in the block but one or more other sessions have rows locked in the same block, and there is no free ITL slot in the block. Usually, Oracle dynamically adds another ITL slot. This may not be possible if there is insufficient free space in the block to add an ITL. If so, the session waits for a slot with a TX enqueue in mode 4. This type of TX enqueue wait corresponds to the wait event enq: TX - allocate ITL entry.
The solution is to increase the number of ITLs available, either by changing the INITRANS or MAXTRANS for the table (either by using an ALTER statement, or by re-creating the table with the higher values).
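A minimal sketch of that ITL fix, using the TASK_LOG table from this incident purely as an example (note that INITRANS only affects newly formatted blocks; existing blocks need a rebuild to pick it up):

```sql
-- Reserve more ITL slots in each newly formatted block of the table.
ALTER TABLE QUARTZ.TASK_LOG INITRANS 10;

-- Existing blocks keep their old setting; rebuild to apply it everywhere.
-- Indexes become UNUSABLE after a MOVE and must be rebuilt
-- (<index_name> is a placeholder):
-- ALTER TABLE QUARTZ.TASK_LOG MOVE;
-- ALTER INDEX <index_name> REBUILD;
```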
Waits for TX in mode 4 can also occur if a session is waiting due to potential duplicates in UNIQUE index. If two sessions try to insert the same key value the second session has to wait to see if an ORA-0001 should be raised or not. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
The solution is to have the first session already holding the lock perform a COMMIT or ROLLBACK.
Waits for TX in mode 4 is also possible if the session is waiting due to shared bitmap index fragment. Bitmap indexes index key values and a range of ROWIDs. Each 'entry' in a bitmap index can cover many rows in the actual table. If two sessions want to update rows covered by the same bitmap index fragment, then the second session waits for the first transaction to either COMMIT or ROLLBACK by waiting for the TX lock in mode 4. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
Waits for TX in Mode 4 can also occur waiting for a PREPARED transaction.
Waits for TX in mode 4 also occur when a transaction inserting a row in an index has to wait for the end of an index block split being done by another transaction. This type of TX enqueue wait corresponds to the wait event enq: TX - index contention.
10.3.7 enqueue (enq:) waits
Enqueues are locks that coordinate access to database resources. This event indicates that the session is waiting for a lock that is held by another session.
The name of the enqueue is included as part of the wait event name, in the form enq: enqueue_type - related_details. In some cases, the same enqueue type can be held for different purposes, such as the following related TX types:
enq: TX - allocate ITL entry
enq: TX - contention
enq: TX - index contention
enq: TX - row lock contention
The V$EVENT_NAME view provides a complete list of all the enq: wait events.
You can check the following V$SESSION_WAIT parameter columns for additional information:
P1 - Lock TYPE (or name) and MODE
P2 - Resource identifier ID1 for the lock
P3 - Resource identifier ID2 for the lock
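For enqueue waits, P1 packs the lock name into the two high bytes and the requested mode into the low bytes; a commonly used decode (a sketch, not tied to this incident):

```sql
-- Decode V$SESSION_WAIT.P1 for enqueue waits:
-- the two high bytes are the ASCII lock name (e.g. 'TX'),
-- the low bytes are the requested lock mode (6 = exclusive, 4 = share).
SELECT SID,
       CHR(BITAND(P1, -16777216) / 16777215) ||
       CHR(BITAND(P1, 16711680) / 65535)      AS lock_name,
       BITAND(P1, 65535)                      AS lock_mode,
       P2 AS id1,
       P3 AS id2
  FROM V$SESSION_WAIT
 WHERE EVENT LIKE 'enq:%';
```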
2012-08-26
Update: a few hundred of these wait events appeared again today; the event is now ranked second among the top events.
Using the same approach to find the waiting SQL and user, this time the waits were caused by the SYSMAN user.
The SQL is:
SELECT CURRENT_STATUS
FROM MGMT_CURRENT_AVAILABILITY
WHERE TARGET_GUID IN
(SELECT TARGET_GUID FROM MGMT_TARGETS WHERE EMD_URL = :B1)
ORDER BY TARGET_GUID
FOR UPDATE;
I had never seen this SQL before. A quick Google search suggests it is probably a problem caused by OEM. I'm only here on temporary weekend duty today.
As a rule, don't touch the database on weekends or Fridays unless absolutely necessary; if something goes wrong it is hard to find people to help. I'll wait until Monday, turn off OEM, and take another look.