The incident:
Early Monday morning I got a call from an insurance customer: the source-side OGG capture process of one of their systems had ABENDED, and the lag had already reached some 34 hours. Since the ODS data warehouse layer on the target side depends on this data being replicated in near real time, I rushed over to find out what had caused it.
Checking the error message:
GGSCI> info all
Program     Status      Group        Lag          Time Since Chkpt
MANAGER     RUNNING
EXTRACT     ABENDED     EDBCONTR     00:00:12     35:24:15
EXTRACT     RUNNING     PDBCONTR     00:00:00     00:00:08
EXTRACT     RUNNING     PODSDBSY     00:00:00     00:00:08
Trying to start it again only got the process ABENDED once more, so let's look at the detailed error in its report:
GGSCI > view report EDBCONTR
***********************************************************************
Oracle GoldenGate Capture for Oracle
Version 10.4.0.19 Build 002
Linux, IA64, 64bit (optimized), Oracle 10 on Sep 23 2009 16:34:26
Copyright (C) 1995, 2009, Oracle and/or its affiliates. All rights reserved.
Starting at 2014-06-30 11:30:21
***********************************************************************
2014-06-30 11:31:51 GGS ERROR 500 Could not find archived log for sequence 101258 thread 2 under default destinations SQL <... v$archived_log WHERE sequence# = :ora_seq_no AND thread# = :ora_thread AND resetlogs_id = :ora_resetlog_id AND archived = 'YES' AND deleted = 'N>, error retrieving redo file name for sequence 101258, archived = 1, use_alternate = 0.
Not able to establish initial position for sequence 101258, rba 2306576.
2014-06-30 11:31:51 GGS ERROR 190 PROCESS ABENDING.
The error tells us the capture process cannot start because it cannot find the archived log for sequence 101258 on thread 2 (node 2).
Based on past experience, my first guess was that this archived log no longer existed on node 2.
Checking the archived logs:
A check of node 2's archived logs showed that the oldest one still present was sequence 101316; sequence 101258 had indeed been deleted.
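The original post does not show the command used for this check; a query along the following lines (a sketch only, filters such as dest_id may need adjusting for your environment) would confirm the oldest thread 2 archived log the database still considers present:
SQL> select min(sequence#)
       from v$archived_log
      where thread# = 2
        and deleted = 'NO'
        and standby_dest = 'NO';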
After checking with the customer: archived logs on this system are kept for 4 days by default. A lag of roughly 35 hours should sit comfortably inside that 4-day window, so how could we be hitting a "Could not find archived log" error at all?
Digging further:
First, let's confirm when the archived log for sequence 101258 was generated:
SQL> select thread#, sequence#, first_time, next_time, completion_time, deleted, status
       from v$archived_log where sequence# = 101258;

THREAD#  SEQUENCE#  FIRST_TIME         NEXT_TIME          COMPLETION_TIME    DELETED  STATUS
      2     101258  2014-6-23 9:00:03  2014-6-23 9:21:14  2014-6-23 9:21:15  YES      D
So archived log 101258 was generated at 2014-6-23 9:21:14. Against the 4-day retention policy, a log produced on the 23rd would indeed no longer exist.
But the capture process was only about 35 hours behind (which puts its failure at roughly midnight on June 29), so why would it need an archived log from June 23 when restarted? The following check revealed the reason:
GGSCI > info edbcontr,showch
EXTRACT EDBCONTR Last Started 2014-06-19 00:43 Status ABENDED
Checkpoint Lag 00:00:12 (updated 35:24:41 ago)
Log Read Checkpoint Oracle Redo Logs
2014-06-29 00:20:16 Thread 1, Seqno 73918, RBA 28440336
Log Read Checkpoint Oracle Redo Logs
2014-06-29 00:20:15 Thread 2, Seqno 101361, RBA 8411200
Current Checkpoint Detail:
Read Checkpoint #1
Oracle RAC Redo Log
Startup Checkpoint (starting position in the data source):
Thread #: 1
Sequence #: 73704
RBA: 92702736
Timestamp: 2014-06-19 00:39:07.000000
SCN: 11.1862969941 (49107610197)
Redo File: Not Available
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 73918
RBA: 28439056
Timestamp: 2014-06-29 00:20:16.000000
SCN: 11.2033818536 (49278458792)
Redo File: Not Available
Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 73918
RBA: 28440336
Timestamp: 2014-06-29 00:20:16.000000
SCN: 11.2033818540 (49278458796)
Redo File: Not Available
Read Checkpoint #2
Oracle RAC Redo Log
Startup Checkpoint (starting position in the data source):
Thread #: 2
Sequence #: 101180
RBA: 67709456
Timestamp: 2014-06-19 00:39:07.000000
SCN: 11.1862969945 (49107610201)
Redo File: +RECO/cjww/onlinelog/group_10.652.669224139
Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 2
Sequence #: 101258
RBA: 2306576
Timestamp: 2014-06-23 09:02:36.000000
SCN: 11.1933686105 (49178326361)
Redo File: Not Available
Current Checkpoint (position of last record read in the data source):
Thread #: 2
Sequence #: 101361
RBA: 8411200
Timestamp: 2014-06-29 00:20:15.000000
SCN: 11.2033818386 (49278458642)
Redo File: Not Available
Write Checkpoint #1
GGS Log Trail
Current Checkpoint (current write position):
Sequence #: 9879
RBA: 96417801
Timestamp: 2014-06-29 00:20:28.803436
Extract Trail: ./dirdat/n1
We know that when an OGG capture process is restarted, it performs an internal recovery and goes back to read the archived log indicated by the Recovery Checkpoint (the position of the oldest unprocessed transaction). As the output above shows, for thread 2 that log is Sequence #: 101258.
The conventional fix:
The usual way out of this situation is to use RMAN to restore node 2's archived logs from sequence 101258 through 101315, then simply restart the capture process once the restore completes.
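Had a backup existed, the restore would look roughly like this (a sketch only; adjust the sequence range, thread and restore destination to your own case):
RMAN> restore archivelog sequence between 101258 and 101315 thread 2;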
But the reality at this customer was that the archived logs from the 23rd had already been deleted, with no backup of them anywhere.
So the conventional approach was clearly not going to work.
The actual fix:
The Recovery Checkpoint carries Timestamp: 2014-06-23 09:02:36, while the Current Checkpoint carries Timestamp: 2014-06-29 00:20:15. Why is there such a large gap between these two checkpoints?
The customer confirmed that replication had been running normally the whole time from the 23rd through the 29th.
Given that, our approach was to change the Extract's starting point so that the Recovery Checkpoint timestamp matched the Current Checkpoint timestamp, i.e. 2014-06-29 00:20:15.
The exact steps:
GGSCI> alter extract EDBCONTR, tranlog, begin 2014-06-29 00:20:15
GGSCI> start EDBCONTR
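After the restart it is worth confirming the process stays RUNNING and that the lag is shrinking, e.g. with the standard GGSCI checks (this verification step is not shown in the original post):
GGSCI> info EDBCONTR
GGSCI> lag extract EDBCONTR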
The process did come back up, but it kept abending intermittently while catching up; each time, another start let it carry on. It was mainly reporting the following two errors:
2014-06-30 12:22:37 GGS ERROR 182 OCI Error describe for query SELECT * FROM "YXXT"."SYS_EXPORT_SCHEMA_01" (bad syntax) (status = 942-ORA-00942: table or view does not exist)
2014-06-30 12:25:01 GGS ERROR 109 ORA-01000: maximum open cursors exceeded
Judging from these errors, the process had abended around midnight on the 29th, which is exactly when the system was running an expdp job; the errors above were therefore the culprit behind the abend.
Error 1: tables such as SYS_EXPORT_SCHEMA_01 are generated when the application user runs expdp to export data. To prevent this error, I added a TABLEEXCLUDE parameter to the capture process's parameter file to exclude these tables (see the parameter sketch after these two points);
Error 2: the maximum number of open cursors was exceeded. This was quite possibly a side effect of error 1: during the expdp run the system went past its open_cursors limit, hence the error.
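A minimal sketch of that exclusion, assuming the expdp jobs run as YXXT and their master tables all match SYS_EXPORT_* (the exact pattern used in the customer's parameter file is not shown in the original post):
-- in the EDBCONTR Extract parameter file
TABLEEXCLUDE YXXT.SYS_EXPORT_*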
Summary:
1: By manually bringing the Recovery Checkpoint timestamp in line with the Current Checkpoint timestamp, the restarted process could skip reading log Sequence #: 101258 and instead read from the log at the Log Read Checkpoint (2014-06-29 00:20:15 Thread 2, Seqno 101361, RBA 8411200), so it could carry on capturing and work off the lag. As for why the two timestamps differed by so much, my suspicion is that some transaction opened on the 23rd was still uncommitted on the 29th; of course that needs to be verified with a test (a query for spotting such long-open transactions is sketched after this summary).
2: Adding the TABLEEXCLUDE parameter to exclude the SYS_EXPORT_SCHEMA_01-type tables produced by expdp prevents the process from abending on them again.
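For reference, a query along these lines (a sketch, not from the original post) lists currently open transactions and when they started, which is one way to check the long-uncommitted-transaction theory; OGG itself can also report the open transactions it is holding via SEND EXTRACT EDBCONTR, SHOWTRANS:
SQL> select s.sid, s.serial#, s.username, t.start_time, t.used_ublk, t.used_urec
       from v$transaction t
       join v$session s on s.taddr = t.addr
      order by t.start_time;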
Original post: http://blog.sina.com.cn/s/blog_aa84cfe40102uwa3.html