http://www.cnblogs.com/killkill/archive/2009/06/30/1514015.html
Recently, while querying a certain log, I found the query extremely slow. Past experience suggested the cause was an oversized log table, so to speed things up I decided to migrate most of the historical data into another table. This article mainly records the process of trimming down that large table. There are other ways to solve the underlying problem, but the focus here is on how to handle a large-volume delete and the cleanup work that follows it.

First, let's look at how much space the table we are going to migrate occupies:
SQL> select segment_name , bytes/1048576 MB
  2  from user_segments
  3  order by bytes
  4  /

SEGMENT_NAME                                               MB
-------------------------------------------------- ----------
TAB_NODE_PARAM                                           .375
SYS_LOB0000053047C00004$$                               .4375
PK_TAB_FLOW_HIS                                             1
PK_TAB_NODE_HIS                                             2
TAB_TASK_HIS                                                2
TAB_FLOW_HIS                                                4
TAB_NODE_HIS                                               17
PK_TAB_APPEXCHANGE_LOG                                    152
TAB_APPEXCHANGE_LOG                                       768
Next, a rough idea of what proportion of the table the historical data accounts for:
SQL> select count(*) from tab_appexchange_log
  2  where receive_time >= to_date('2009-01-01','yyyy-mm-dd')
  3  union all
  4  select count(*) from tab_appexchange_log
  5  where receive_time < to_date('2009-01-01','yyyy-mm-dd')
SQL> /

  COUNT(*)
----------
   1584298
   4037710
The result shows there really is a lot of historical data, so we will migrate the pre-2009 rows into another table.
SQL> create table tab_appexchange_log_20090101
  2  as
  3  select * from tab_appexchange_log
  4  where receive_time < to_date('2009-01-01','yyyy-mm-dd')
SQL> /

Table created.

SQL> select count(*) from tab_appexchange_log_20090101
  2  /

  COUNT(*)
----------
   4037710
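As an aside, not part of the original session: for a copy of this size, a NOLOGGING (and optionally parallel) CTAS can cut redo generation and shorten the copy noticeably. A hedged variant of the statement above might look like this; remember that NOLOGGING objects need a backup afterwards if they must be recoverable:

-- Hypothetical variant of the CTAS above, trading recoverability for speed
create table tab_appexchange_log_20090101
  nologging
  parallel 4
as
select /*+ parallel(t 4) */ *
from   tab_appexchange_log t
where  receive_time < to_date('2009-01-01','yyyy-mm-dd');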
A copy of the data is now in the new table, so we can delete the old rows from the original table.
SQL> delete tab_appexchange_log where receive_time < to_date('2009-01-01','yyyy-mm-dd')
  2  /
delete tab_appexchange_log where receive_time < to_date('2009-01-01','yyyy-mm-dd')
*
ERROR at line 1:
ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS2'
Here we hit an ORA-30036 error, which roughly means the undo tablespace cannot hold the undo generated by the operation. There is simply too much data to delete in a single transaction, so the plan is to break the job into pieces: delete a batch, commit, and repeat.
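Before retrying, it can help to see how big the undo tablespace actually is and whether its datafiles can autoextend. This query is not from the original session, just a sketch against the standard dictionary view, using the tablespace name reported in the error:

-- Size and autoextend settings of the undo tablespace named in ORA-30036
select tablespace_name,
       round(sum(bytes)   /1048576) current_mb,
       round(sum(maxbytes)/1048576) max_mb,
       max(autoextensible)          autoext
from   dba_data_files
where  tablespace_name = 'UNDOTBS2'
group  by tablespace_name;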
SQL> delete tab_appexchange_log where receive_time < to_date('2007-01-01','yyyy-mm-dd')
  2  /

0 rows deleted.

SQL> delete tab_appexchange_log where receive_time < to_date('2008-01-01','yyyy-mm-dd')
  2  /

1586227 rows deleted.

SQL> commit;

Commit complete.

SQL> delete tab_appexchange_log where receive_time < to_date('2008-06-01','yyyy-mm-dd')
  2  /

813019 rows deleted.

SQL> commit;

Commit complete.

SQL> delete tab_appexchange_log where receive_time < to_date('2009-01-01','yyyy-mm-dd')
  2  /

1638464 rows deleted.

SQL> commit;

Commit complete.
We can predict that the table should now hold only about 15***** rows, and the following confirms it:
SQL> select count(*) from tab_appexchange_log
  2  /

  COUNT(*)
----------
   1584522
Let's check the space usage again:
SQL> select segment_name , bytes/1048576 MB
  2  from user_segments
  3  order by bytes asc;

SEGMENT_NAME                                               MB
-------------------------------------------------- ----------
TAB_NODE_PARAM                                           .375
SYS_LOB0000053047C00004$$                               .4375
PK_TAB_FLOW_HIS                                             1
PK_TAB_NODE_HIS                                             2
TAB_TASK_HIS                                                2
TAB_FLOW_HIS                                                4
TAB_NODE_HIS                                               17
PK_TAB_APPEXCHANGE_LOG                                    152
TAB_APPEXCHANGE_LOG_20090101                              552
TAB_APPEXCHANGE_LOG                                       768
The space used by the original table has not been released. This comes down to the high water mark (HWM): a plain DELETE empties blocks but never lowers the segment's HWM, so the segment stays the same size. Note also that the index PK_TAB_APPEXCHANGE_LOG on TAB_APPEXCHANGE_LOG has not given back any space either.
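To see the HWM effect directly, DBMS_SPACE.UNUSED_SPACE reports how many blocks sit above the HWM ("unused"); the blocks below the HWM that the DELETE just emptied are not counted, which is why the segment still looks full. This block is not from the original article, just an illustrative sketch run as the segment owner:

set serveroutput on
declare
  l_total_blocks  number;
  l_total_bytes   number;
  l_unused_blocks number;
  l_unused_bytes  number;
  l_file_id       number;
  l_block_id      number;
  l_last_block    number;
begin
  dbms_space.unused_space(
    segment_owner             => user,
    segment_name              => 'TAB_APPEXCHANGE_LOG',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  -- total_blocks stays large after the DELETE; only blocks above the HWM count as unused
  dbms_output.put_line('total blocks : '||l_total_blocks);
  dbms_output.put_line('unused blocks: '||l_unused_blocks);
end;
/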
Now let's look back at the pressure the deletes just put on the undo tablespace:
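The original only shows the output, but judging by the column headings it comes from V$UNDOSTAT, which records undo consumption in roughly ten-minute buckets. A reconstructed query along these lines would produce the columns below (the exact query in the original session may have differed):

-- UNDOBLKS = undo blocks consumed in each interval; most recent intervals first
select to_char(begin_time,'yyyy-mm-dd hh24:mi:ss'),
       to_char(end_time,'yyyy-mm-dd hh24:mi:ss'),
       undoblks
from   v$undostat
order  by begin_time desc;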
TO_CHAR(BEGIN_TIME, TO_CHAR(END_TIME,'Y   UNDOBLKS
------------------- ------------------- ----------
2009-06-30 13:39:32 2009-06-30 13:46:35      96164
2009-06-30 13:29:32 2009-06-30 13:39:32      89316
2009-06-30 13:19:32 2009-06-30 13:29:32     209885
2009-06-30 13:09:32 2009-06-30 13:19:32        105
2009-06-30 12:59:32 2009-06-30 13:09:32        125
2009-06-30 12:49:32 2009-06-30 12:59:32         74
2009-06-30 12:39:32 2009-06-30 12:49:32         80
2009-06-30 12:29:32 2009-06-30 12:39:32         72
2009-06-30 12:19:32 2009-06-30 12:29:32         76
2009-06-30 12:09:32 2009-06-30 12:19:32         89
2009-06-30 11:59:32 2009-06-30 12:09:32        126
The DML clearly put considerable pressure on the undo tablespace: the three busy intervals add up to roughly 395,000 undo blocks, which is about 3 GB assuming an 8 KB block size.

Now we start reclaiming the space occupied by the table:
SQL> alter table TAB_APPEXCHANGE_LOG enable row movement;

Table altered.

SQL> alter table TAB_APPEXCHANGE_LOG shrink space;

Table altered.
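A side note not taken from the original session: on 10g and later the shrink can also be split into two phases, or extended to the dependent indexes in a single statement, for example:

-- Compact the rows first without moving the HWM (less disruptive while the table is in use) ...
alter table TAB_APPEXCHANGE_LOG shrink space compact;
-- ... then complete the shrink, cascading to dependent index segments as well
alter table TAB_APPEXCHANGE_LOG shrink space cascade;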
A delete of this scale leaves the indexes full of entries that are merely marked as deleted; they take up space without doing any useful work, so the best move is to rebuild the index and clear those dead entries out.
SQL> analyze index PK_TAB_APPEXCHANGE_LOG validate structure;

Index analyzed.

SQL> select height, name , LF_ROWS_LEN,LF_ROWS,DEL_LF_ROWS,DEL_LF_ROWS_LEN from index_stats;

    HEIGHT NAME                           LF_ROWS_LEN    LF_ROWS DEL_LF_ROWS DEL_LF_ROWS_LEN
---------- ------------------------------ ----------- ---------- ----------- ---------------
         3 PK_TAB_APPEXCHANGE_LOG            88905533    5621608     4036429        63558680

SQL> alter index PK_TAB_APPEXCHANGE_LOG rebuild online;

Index altered.

SQL> analyze index PK_TAB_APPEXCHANGE_LOG validate structure;

Index analyzed.

SQL> select height, name , LF_ROWS_LEN,LF_ROWS,DEL_LF_ROWS,DEL_LF_ROWS_LEN from index_stats;

    HEIGHT NAME                           LF_ROWS_LEN    LF_ROWS DEL_LF_ROWS DEL_LF_ROWS_LEN
---------- ------------------------------ ----------- ---------- ----------- ---------------
         3 PK_TAB_APPEXCHANGE_LOG            25347109    1585195           0               0
If we checked the space used by the index at this point, we would find it has not gone down. Like the table, the index also needs its space shrunk:
SQL> alter index PK_TAB_APPEXCHANGE_LOG shrink space;

Index altered.

SQL> select segment_name , bytes/1048576 MB
  2  from user_segments
  3  order by bytes
  4  /

SEGMENT_NAME                                               MB
-------------------------------------------------- ----------
TAB_NODE_PARAM                                           .375
SYS_LOB0000053047C00004$$                               .4375
PK_TAB_FLOW_HIS                                             1
TAB_TASK_HIS                                                2
PK_TAB_NODE_HIS                                             2
TAB_FLOW_HIS                                                4
TAB_NODE_HIS                                               17
PK_TAB_APPEXCHANGE_LOG                                28.1875
TAB_APPEXCHANGE_LOG                                  220.8125
TAB_APPEXCHANGE_LOG_20090101                              552
That completes this round of trimming a large table. To recap, the main points are (a consolidated sketch follows the list):

1. Delete the data in batches, committing after each batch.
2. Reclaim the space occupied by the table.
3. Deal with the index issues that follow the mass delete and table shrink (here, rebuild and then shrink the index).
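Putting the steps together, here is a consolidated sketch of the whole sequence using the names from this article; the batch boundaries are illustrative, not a verbatim replay of the session above:

-- 1. Copy the history out, then delete it from the source in bounded batches, committing each time
create table tab_appexchange_log_20090101 as
  select * from tab_appexchange_log
  where  receive_time < to_date('2009-01-01','yyyy-mm-dd');

delete tab_appexchange_log where receive_time < to_date('2008-01-01','yyyy-mm-dd');
commit;
delete tab_appexchange_log where receive_time < to_date('2009-01-01','yyyy-mm-dd');
commit;

-- 2. Lower the table's HWM to give the space back
alter table TAB_APPEXCHANGE_LOG enable row movement;
alter table TAB_APPEXCHANGE_LOG shrink space;

-- 3. Clear the deleted index entries and shrink the index
alter index PK_TAB_APPEXCHANGE_LOG rebuild online;
alter index PK_TAB_APPEXCHANGE_LOG shrink space;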
http://psoug.org/reference/truncate.html
As we all know, DROP TABLE and TRUNCATE TABLE first perform a checkpoint on the object, writing the dropped object's dirty blocks out to disk.
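One way to get a feel for how much work that object-level checkpoint has to do is to count the object's dirty buffers in the cache beforehand. This is my own illustrative query, not from the original article; replace the object name as needed:

-- Dirty buffers currently cached for a given object
select count(*) dirty_buffers
from   v$bh
where  objd = (select data_object_id
               from   dba_objects
               where  owner = user
               and    object_name = 'TAB_APPEXCHANGE_LOG'
               and    object_type = 'TABLE')
  and  dirty = 'Y';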
A customer has a system on Oracle 9.2.0.8 that needed a data migration, and for various reasons a logical migration was chosen. The database is fairly large, over 1.5 TB, and the downtime window is limited, so a lot of testing was needed before the real migration: partly to check whether the migration procedure itself had problems, partly to find and tune the performance bottlenecks, and also to estimate how long the real migration would take.

After the first test run, the large number of users and objects created by the test had to be removed, using DROP USER username CASCADE. Unfortunately this turned out to be painfully slow: for one user with more than 9,000 tables, an hour and a half in, only a little over 4,000 tables had been dropped. Why so slow, and is there a way to speed it up?

Since DROP TABLE has to checkpoint the object, this step becomes expensive when the db cache is very large; cutting that time should speed things up dramatically. The first attempt was to issue a checkpoint and flush the entire buffer cache:
alter system checkpoint;
alter session set events 'immediate trace name flush_cache level 1';
This had no noticeable effect.
db_cache_size was around 50 GB and db_keep_cache_size around 6 GB. The parameters were changed: db_keep_cache_size was set to 0 and db_cache_size to 200 MB, the database was restarted, and the DROP USER was run again. This time it completed very quickly.
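The article does not show the exact commands; a hedged sketch of the parameter change, assuming an spfile is in use and the values quoted in the text, would be something like the following (SHUTDOWN/STARTUP are SQL*Plus commands):

-- Shrink the caches for the duration of the mass drop, then restart with the new values
alter system set db_keep_cache_size = 0    scope = spfile;
alter system set db_cache_size      = 200m scope = spfile;
shutdown immediate
startup
-- After the drops finish, restore the original values and restart again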
In another run of the same process, the same parameter change produced an equally dramatic improvement.

It's a simple case, shared here for what it's worth.
http://www.laoxiong.net/drop-table-and-db-cache.html
http://caihorse.iteye.com/blog/614570
The following procedure (from the link above) automates the "delete a batch, commit, repeat" approach:

create or replace procedure delBigTab
(
  p_TableName in varchar2,
  p_Condition in varchar2,
  p_Count     in varchar2
)
as
  pragma autonomous_transaction;
  n_delete number := 0;
begin
  while 1=1 loop
    EXECUTE IMMEDIATE
      'delete from '||p_TableName||' where '||p_Condition||' and rownum <= :rn'
    USING p_Count;
    if SQL%NOTFOUND then
      exit;
    else
      n_delete := n_delete + SQL%ROWCOUNT;
    end if;
    commit;
  end loop;
  commit;
  DBMS_OUTPUT.PUT_LINE('Finished!');
  DBMS_OUTPUT.PUT_LINE('Totally '||to_char(n_delete)||' records deleted!');
end;

Below is the deletion run and its timing:
SQL> create or replace procedure delBigTab
  2  (
  3    p_TableName in varchar2,
  4    p_Condition in varchar2,
  5    p_Count     in varchar2
  6  )
  7  as
  8    pragma autonomous_transaction;
  9    n_delete number:=0;
 10  begin
 11    while 1=1 loop
 12      EXECUTE IMMEDIATE
 13        'delete from '||p_TableName||' where '||p_Condition||' and rownum <= :rn'
 14      USING p_Count;
 15      if SQL%NOTFOUND then
 16        exit;
 17      else
 18        n_delete:=n_delete + SQL%ROWCOUNT;
 19      end if;
 20      commit;
 21    end loop;
 22    commit;
 23    DBMS_OUTPUT.PUT_LINE('Finished!');
 24    DBMS_OUTPUT.PUT_LINE('Totally '||to_char(n_delete)||' records deleted!');
 25  end;
 26  /

Procedure created.

SQL> set timing on
SQL> select min(NUMDLFLOGGUID) from HS_DLF_DOWNLOG_HISTORY;

MIN(NUMDLFLOGGUID)
------------------
          11000000

Elapsed: 00:00:00.23
SQL> exec delBigTab('HS_DLF_DOWNLOG_HISTORY','NUMDLFLOGGUID < 11100000','10000');

PL/SQL procedure successfully completed.

Elapsed: 00:00:18.54
SQL> select min(NUMDLFLOGGUID) from HS_DLF_DOWNLOG_HISTORY;

MIN(NUMDLFLOGGUID)
------------------
          11100000

Elapsed: 00:00:00.18
SQL> set serveroutput on
SQL> exec delBigTab('HS_DLF_DOWNLOG_HISTORY','NUMDLFLOGGUID < 11200000','10000');
Finished!
Totally 96936 records deleted!

PL/SQL procedure successfully completed.

Elapsed: 00:00:18.61

About 19 seconds for roughly 100,000 rows.

SQL> exec delBigTab('HS_DLF_DOWNLOG_HISTORY','NUMDLFLOGGUID < 11300000','10000');
Finished!
Totally 100000 records deleted!

PL/SQL procedure successfully completed.

Elapsed: 00:00:18.62
SQL> exec delBigTab('HS_DLF_DOWNLOG_HISTORY','NUMDLFLOGGUID < 11400000','10000');
Finished!
Totally 100000 records deleted!

PL/SQL procedure successfully completed.

Elapsed: 00:00:18.85
SQL> exec delBigTab('HS_DLF_DOWNLOG_HISTORY','NUMDLFLOGGUID < 13000000','10000');
Finished!
Totally 1000000 records deleted!

PL/SQL procedure successfully completed.

Elapsed: 00:03:13.87

About 3 minutes for 1,000,000 rows.

SQL> exec delBigTab('HS_DLF_DOWNLOG_HISTORY','NUMDLFLOGGUID < 20000000','10000');
Finished!
Totally 6999977 records deleted!

PL/SQL procedure successfully completed.

Elapsed: 00:27:24.69

About 27 minutes for roughly 7,000,000 rows.

The above is for reference only.
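To tie the two halves of this article together: the manual chunked delete from the first example could have been driven by the same procedure. The call below is hypothetical, not part of either original session:

-- Delete pre-2009 rows from the log table, 10,000 rows per transaction
exec delBigTab('TAB_APPEXCHANGE_LOG', 'receive_time < to_date(''2009-01-01'',''yyyy-mm-dd'')', '10000');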