How to fix a slow Oracle imp


The core of performance and stress testing is preparing the test environment, which frequently involves importing and exporting very large data sets. For an imp job of more than 100 GB, a by-the-book import that fully follows production-environment standards can be a painful experience.

There are some good tricks for importing large data sets into a test environment; in particular, at critical project stages we should take full advantage of the fact that a test database runs at a relatively lower safety level than production.

Take, for example, a data import into a test environment with the following characteristics:

  • Oracle 11g RAC running in archivelog mode
  • The RAC stores its data on only two disk-array LUNs with mediocre read/write performance
  • A partitioned table with more than 300 GB of data must be loaded with imp; each partition holds about 50 million rows
  • The table uses range partitioning (one partition per 3 days) and has local indexes

 

When I received the dmp file on Friday, I kicked off imp before leaving work, left it running in the background, and went home. Over the weekend and into Monday the following problems showed up:

  • No buffer size was set for imp, so it failed with an "IMP-" error and the import aborted
  • During the import the RAC nodes generated on average 1 GB of archive logs every 2 minutes; the 500 GB archive destination filled up overnight and imp aborted
  • The tablespace holding the data had only a single datafile, and not a bigfile one; it filled up and imp aborted again
  • imp was very slow: after two days the disks were constantly busy, yet the log showed hardly any rows imported successfully
  • The I/O performance really was poor; nmon and vmstat showed writes below 20 MB/s with disk busy pegged at 101%. The XP20000 storage really wimped out on us!

 

Later, the imp time was cut dramatically with the following measures (a minimal sketch follows the list):

  • change the RAC from archivelog to noarchivelog mode
  • alter table t1 nologging;
  • drop the table's local indexes first (DROP INDEX ...), recreating them after the load
  • run imp with a much larger buffer
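
A minimal sketch of these steps, assuming the single table t1 from above; the index name, login, dump file name, and buffer value are placeholders:

  -- switch to noarchivelog (test environment only): stop all RAC instances,
  -- then from one instance (some releases also require cluster_database=false)
  SQL> startup mount
  SQL> alter database noarchivelog;
  SQL> alter database open;

  -- mark the table NOLOGGING (note: only direct-path operations skip redo)
  -- and drop its local indexes
  SQL> alter table t1 nologging;
  SQL> drop index t1_local_idx;

  -- re-run imp with a large buffer (value in bytes)
  $ imp scott/tiger file=t1.dmp tables=t1 ignore=y buffer=209715200 log=imp_t1.log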


Data that hadn't finished loading after three days now went in within half a day, and we also saved a lot of archivelog housekeeping.

Metalink has a note dedicated to imp tuning: "Tuning Considerations When Import Is Slow [ID 93763.1]".

It makes the following useful points; adjusting these areas can significantly improve imp throughput:

1. System-level changes

  • Create and use one large rollback segment in place of the existing small ones; about 50% of the size of the largest table being imported is enough (sketched after this list).
  • Put the database in NOARCHIVELOG mode.
  • Create several large redo log groups, the larger the better: bigger redo logs mean fewer log switches. Messages in alert.log like 'Thread 1 cannot allocate new log, sequence 17, Checkpoint not complete' indicate that you need larger redo logs.
  • If possible, put rollback segments, datafiles, and redo log files on separate disks to avoid I/O contention.
  • Make sure no other I/O activity runs during the import, to reduce resource contention.
  • Make sure there are no statistics on the data dictionary tables.
  • Check sqlnet.ora and confirm TRACE_LEVEL_CLIENT = OFF.
  • Increase DB_BLOCK_SIZE. This needs careful overall consideration, since it cannot be changed once the database is created.
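
A hedged sketch of a few of these commands; the segment names, file path, and sizes are illustrative, and the rollback-segment part only applies under manual undo management (modern releases normally use automatic undo):

  -- one large rollback segment with two equal extents; take the others offline
  SQL> create rollback segment rbs_imp
         storage (initial 2g next 2g minextents 2) tablespace rbs;
  SQL> alter rollback segment rbs_imp online;
  SQL> alter rollback segment rbs01 offline;

  -- add a large redo log group to cut down on log switches
  SQL> alter database add logfile group 5 ('/u01/oradata/redo05.log') size 1g;

  -- check whether the dictionary has been analyzed
  SQL> select count(*) from dba_tables
         where owner = 'SYS' and last_analyzed is not null;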

 

2. init.ora parameter changes

  • Set LOG_CHECKPOINT_INTERVAL (measured in OS blocks) to a value larger than the size of the redo logs, reducing checkpoints to the minimum of one per log switch.
  • Increase SORT_AREA_SIZE; if the host has enough free memory, try 5-10 times the default. If the system starts paging and swapping, it is set too high.
  • Increase db_block_buffers and shared_pool_size. (Illustrative init.ora entries follow this list.)
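
For reference, the corresponding entries might look like the following; the values are illustrative, and db_block_buffers/sort_area_size are the legacy parameter names the note uses (newer releases usually size memory with db_cache_size, pga_aggregate_target, or sga_target instead):

  # init.ora -- illustrative values only
  log_checkpoint_interval = 2097152     # OS blocks; larger than one redo log
  sort_area_size          = 52428800    # bytes; roughly 5-10x the default
  db_block_buffers        = 100000      # buffer cache size, in blocks
  shared_pool_size        = 300000000   # bytes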

 

Personally, I suspect these parameters make no real difference to imp performance and the defaults should be sufficient, though I have not actually tested this.

3. imp parameter usage

  • Use COMMIT=N (which is in fact the default), meaning imp commits once at the end of each table rather than after every buffer. The flip side: if your luck runs out mid-table, everything loaded so far in that table is rolled back!
  • Set the buffer fairly large; generally speaking 200 MB is plenty.
  • Use INDEXES=N so indexes are not maintained while rows are loaded; rebuild them after the data is in. This markedly improves imp throughput. (A sample command line follows this list.)
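
Putting these together, a typical invocation might look like the following; the login, file names, and buffer value are placeholders. The index DDL can be extracted first with INDEXFILE and replayed after the load:

  -- write the table/index DDL to a script without importing any rows
  $ imp scott/tiger file=t1.dmp full=y indexfile=t1_idx.sql

  -- load the data: one commit per table, ~200 MB buffer, no index maintenance
  $ imp scott/tiger file=t1.dmp tables=t1 ignore=y commit=n buffer=209715200 indexes=n log=imp_t1.log

  -- afterwards, recreate the indexes by running the DDL saved in t1_idx.sql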

 

Which of these significantly improve imp performance? Which are only appropriate for a test environment and cannot be tried on a production system with stricter safety requirements? That's your call!

Of course, Data Pump (expdp/impdp), introduced with Oracle 10g, delivers much better performance by comparison and adds a parallel option, making it an excellent choice for export/import work.
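
For comparison, a hedged Data Pump equivalent; the directory object, file names, and degree of parallelism are placeholders:

  -- a server-side directory object pointing at the dump location must exist
  SQL> create directory dump_dir as '/u01/dump';

  $ expdp scott/tiger directory=dump_dir dumpfile=t1_%U.dmp parallel=4 tables=t1 logfile=expdp_t1.log
  $ impdp scott/tiger directory=dump_dir dumpfile=t1_%U.dmp parallel=4 logfile=impdp_t1.log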


The full text of that Metalink note, "Tuning Considerations When Import Is Slow [ID 93763.1]", follows:

TUNING CONSIDERATIONS WHEN THE IMPORT IS SLOW
=============================================

The Oracle Import process can sometimes take many hours or even days to complete successfully. Unfortunately, many imports are needed to perform crash recovery on databases that, for one reason or another, are not functional. This makes the time needed for the import even more critical. There are not many ways to speed up a large import, but here are a few basic changes that can reduce the overall time of import.

System Level Changes

--------------------

 - Create and use one large rollback segment, take all other rollback segments offline.  

One rollback segment approximately 50% of the size of the largest table being imported should be large enough.  Import  basically does 'insert into tabX values (',,,,,,,')' for every row in your database, so the rollback generated for insert statements is only the rowid for each row inserted.  Also create the rollback with the minimum 2 extents of equal size.

- Put the database in NOARCHIVELOG mode until the import is complete.

  This will reduce the overhead of creating and managing archive logs. 

For more info on enabling and disabling archive logging see Note:69739.1

 - As with the rollback segment, create several large redo log files, the larger the better. 

The larger the log files, the fewer log switches that are needed. Check the alert log for messages like 'Thread 1 cannot allocate new log, sequence 17, Checkpoint not complete'. This indicates the log files need to be bigger or you need more of them.

 - If possible, make sure that rollback, table data, and redo log files are all on separate disks.  

This increases throughput by reducing I/O contention.

 - Make sure there is no IO contention occurring. 

If possible, don't run other jobs which may compete with the import for system resources.

 - Make sure there are no statistics on data dictionary tables.

 - Check the sqlnet.ora in the TNS_ADMIN location.  Make sure that

  TRACE_LEVEL_CLIENT = OFF

- Increase DB_BLOCK_SIZE when recreating the database, if possible.  

The larger the block size, the smaller the number of I/O cycles needed. This change is permanent, so consider all effects it will have before changing it.

 For more info on db block sizing see Note:34020.1

 

Init.ora Parameter Changes

--------------------------

 - Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size of the redo log files.  This number is in OS blocks (512 bytes on most Unix systems).  This reduces checkpoints to a minimum (only at log switch time).

 - Increase SORT_AREA_SIZE.  Indexes are not being built yet, but any unique or primary key constraints will be.  The increase depends on what other activity is on the machine and how much free memory is available.  Try 5-10 times the normal setting.  If the machine starts swapping and paging, you have set it too high.

 - Try increasing db_block_buffers and shared_pool_size.  The shared pool holds cached dictionary info and things like cursors, procedures, triggers, etc.  Dictionary info or cursors created on the import's behalf (there may be many, since it is always working on a new table) may sometimes clog the pipes, so this stale stuff sits around until the aging/flush mechanisms kick in on a per-request basis, because a request can't be satisfied from the lookaside lists.

   ALTER SYSTEM FLUSH SHARED_POOL throws out *all* currently unused objects in one operation and hence defragments the pool.

If you can restart the instance with a bigger SHARED_POOL_SIZE prior to importing, that would definitely help. When it starts to slow down, at least you can see what's going on by doing the following:

  SQL> set serveroutput on size 2000;
  SQL> begin
    2    dbms_shared_pool.sizes(2000);
    3  end;
    4  /

 The dbms_shared_pool package is created by $ORACLE_HOME/rdbms/admin/dbmspool.sql


Import Options Changes

----------------------

 - Use COMMIT=N.  This will cause import to commit after each object (table), not after each buffer.  This is why one large rollback segment is needed.

 - Use a large BUFFER size, e.g. BUFFER=1024000.  This value also depends on system activity, database size, etc.  Several megabytes is usually enough, but if you have the memory you can go higher.  Again, check for paging and swapping at the OS level to see if it is too high.  This reduces the number of times the import program has to go to the export file for data; each time it fetches one buffer's worth of data.

 - Consider using INDEXES=N during import.  The user-defined indexes will be created after the table has been created and populated, but if the primary objective of the import is to get the data in there as fast as possible, then importing with INDEXES=N will help.  The indexes can then be created at a later date when time is not a factor.

 If this approach is chosen, then you will need to use the INDEXFILE option to extract the DDL for the index creation, or to re-run the import with INDEXES=Y and ROWS=N.

 For more info on extracting the DDL from an export file see Note:29765.1

 

REMEMBER THE RULE OF THUMB: import takes a minimum of 2 to 2.5 times the export time.

Large Imports of LOB Data:

--------------------------

 Generally speaking, a good formula for determining a target elapsed time for a table import versus the elapsed time for the table export is:

  import elapsed time = export elapsed time X 4

 - Eliminate indexes.  This affects total import time significantly.  The existence of LOB data requires special consideration.  The LOB locator has a primary key that cannot be explicitly dropped or ignored during the import process.

 - Make certain that sufficient space, in sufficiently large contiguous chunks, is available to complete the data load.  The following should provide an accurate image of the space available in the target tablespace:

   alter tablespace mailindx coalesce;

   select bytes from dba_free_space where tablespace_name = 'MAILINDX' order by bytes desc;


 Large Imports of LONG Data:

---------------------------

 Importing a table with a LONG column may cause a higher rate of I/O and disk utilization than importing a table without a LONG column. 

 There are no parameters available within IMP to help optimize the import of these data types.

