Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data ? [ID 552424.1] |
Modified: 06-APR-2009   Type: HOWTO   Status: PUBLISHED
In this Document
Goal
Solution
1. Introduction.
2. Export Data Pump: unloading data in "Direct Path" mode.
3. Export Data Pump: unloading data in "External Tables" mode.
4. Import Data Pump: loading data in "Direct Path" mode.
5. Import Data Pump: loading data in "External Tables" mode.
6. How to enforce a specific load/unload method?
7. Known issues.
8. For Support: Enhancement Requests.
References
Applies to:
Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 11.1.0.6
Oracle Server - Personal Edition - Version: 10.1.0.2 to 11.1.0.6
Oracle Server - Standard Edition - Version: 10.1.0.2 to 11.1.0.6
Enterprise Manager for RDBMS - Version: 10.1.0.2 to 11.1
Information in this document applies to any platform.
Goal

Starting with Oracle10g, Oracle Data Pump can be used to move data in and out of a database. Data Pump can make use of different methods to move the data, and will automatically choose the fastest method. It is possible though, to manually enforce a specific method. This document demonstrates how to specify the method with which data will be loaded or unloaded with Data Pump.
Solution

1. Introduction.

Data Pump can use four mechanisms to move data in and out of a database: Direct Path, External Tables, Data File Copying (transportable tablespaces), and Network Link Import.
The two most commonly used methods to move data in and out of databases with Data Pump are the "Direct Path" method and the "External Tables" method.
1.1. Direct Path mode.
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it.
1.2. External Tables mode.
If data cannot be moved in direct path mode, or if there is a situation where parallel SQL can be used to speed up the data move even more, then the external tables mode is used. The external table mechanism creates an external table that maps the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database.
Note: When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.
1.3. Data File Copying mode.
This mode is used when a transport tablespace job is started, i.e.: the TRANSPORT_TABLESPACES parameter is specified for an Export Data Pump job. This is the fastest method of moving data because the data is not interpreted nor altered during the job, and Export Data Pump is used to unload only structural information (metadata) into the dump file.
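For completeness, a minimal sketch of how such a job is started, after the tablespace has been placed in read-only mode and before its data files are copied at the operating system level (the tablespace name USERS and directory object MY_DIR are assumptions):

%expdp system/manager DIRECTORY=my_dir DUMPFILE=tts.dmp LOGFILE=expdp_tts.log \
  TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=y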
1.4. Network Link Import mode.
This mode is used when the NETWORK_LINK parameter is specified during an Import Data Pump job. This is the slowest of the four access methods because this method makes use of an INSERT SELECT statement to move the data over a database link, and reading over a network is generally slower than reading from a disk.
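A minimal sketch of such a job, assuming a database link SOURCE_DB from the target database to the source database (no dump file is written; the directory object is still needed for the log file):

%impdp system/manager DIRECTORY=my_dir LOGFILE=impdp_nl.log \
  NETWORK_LINK=source_db TABLES=scott.my_tab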
The "Data File Copying" and "Network Link Import" methods to move data in and out of databases are outside the scope of this article, and therefore not discussed any further.
For details about the access methods of the classic export client (exp), see:
Note:155477.1 "Parameter DIRECT: Conventional Path Export Versus Direct Path Export"
2. Export Data Pump: unloading data in "Direct Path" mode.

Export Data Pump (expdp) will use the "Direct Path" (DIRECT_PATH) method to unload data when all of the following conditions are met:
2.1. The structure of a table allows a Direct Path unload, i.e.:
- The table does not have fine-grained access control enabled for SELECT.
- The table is not a queue table.
- The table does not contain one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table does not contain encrypted columns.
- The table does not contain a column of an evolved type that needs upgrading.
- If the table has a column of datatype LONG or LONG RAW, then this column is the last column.
2.2. The parameters QUERY, SAMPLE, or REMAP_DATA were not used for the specified table in the Export Data Pump job.
2.3. The table or partition is relatively small (up to 250 MB), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1). A quick way to check the segment size is sketched at the end of this section.
Note that with an unload of data in Direct Path mode, parallel I/O execution processes (PX processes) cannot be used to unload the data in parallel (parallel unload is not supported in Direct Path mode).
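The 250 MB threshold in 2.3 is evaluated against the estimated size of the table or partition. As a rough check, the size of the underlying segment can be queried from the dictionary; the owner and table name below are placeholders for your own objects:

SELECT owner, segment_name, segment_type,
       ROUND(bytes/1024/1024) AS size_mb
  FROM dba_segments
 WHERE owner = 'SCOTT'
   AND segment_name = 'MY_TAB';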
3. Export Data Pump: unloading data in "External Tables" mode.

Export Data Pump (expdp) will use the "External Tables" (EXTERNAL_TABLE) method to unload data in any of the following situations:
3.1. Data cannot be unloaded in Direct Path mode, because of the structure of the table, i.e.:
- Fine-grained access control for SELECT is enabled for the table.
- The table is a queue table.
- The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table contains encrypted columns.
- The table contains a column of an evolved type that needs upgrading.
- The table contains a column of type LONG or LONG RAW that is not last.
3.2. Data could also have been unloaded in "Direct Path" mode, but the parameters QUERY, SAMPLE, or REMAP_DATA were used for the specified table in the Export Data Pump job (see the example at the end of this section).
3.3. Data could also have been unloaded in "Direct Path" mode, but the table or partition is relatively large (> 250 MB) and parallel SQL can be used to speed up the unload even more.
Note that with an unload of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to unload the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"
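As an illustration of situation 3.2: specifying a QUERY clause for a table forces the data to be unloaded through the SQL engine in "External Tables" mode. A minimal sketch, assuming a table SCOTT.MY_TAB with a column DEPTNO (a parameter file is used to avoid shell quoting issues):

%cat expdp_query.par
DIRECTORY=my_dir
DUMPFILE=expdp_query.dmp
LOGFILE=expdp_query.log
TABLES=scott.my_tab
QUERY=scott.my_tab:"WHERE deptno = 10"

%expdp system/manager PARFILE=expdp_query.par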
4. Import Data Pump: loading data in "Direct Path" mode.

Import Data Pump (impdp) will use the "Direct Path" (DIRECT_PATH) method to load data when all of the following conditions are met:
4.1. The structure of a table allows a Direct Path load, i.e.:
- A global index does not exist on a multipartition table during a single-partition load. This includes object tables that are partitioned.
- A domain index does not exist for a LOB column.
- The table is not in a cluster.
- The table does not have BFILE columns or columns of opaque types.
- The table does not have VARRAY columns with an embedded opaque type.
- The table does not have encrypted columns.
- Supplemental logging is not enabled, or it is enabled but the table does not have a LOB column.
- The table into which data is being imported is a pre-existing table and:
– There is no active trigger, and:
– The table is not partitioned, or it is partitioned and has at least one index, and:
– Fine-grained access control for INSERT mode is not enabled, and:
– No enabled constraints exist other than table check constraints, and:
– A unique index does not exist.
4.2. The parameters QUERY or REMAP_DATA were not used for the specified table in the Import Data Pump job.
4.3. The table or partition is relatively small (up to 250 MB), or the table or partition is larger, but the job cannot run in parallel because the parameter PARALLEL was not specified (or was set to 1).
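For example, a straightforward import of a single table into a schema where the table does not yet exist will normally be loaded in "Direct Path" mode, provided none of the restrictions of 4.1 apply and no QUERY or REMAP_DATA is used. A minimal sketch (directory object, dump file and table name are assumptions):

%impdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp LOGFILE=impdp_dp.log \
  TABLES=scott.my_tab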
5. Import Data Pump: loading data in "External Tables" mode.

Import Data Pump (impdp) will use the "External Tables" (EXTERNAL_TABLE) method to load data in any of the following situations:
5.1. Data cannot be loaded in Direct Path mode, because at least one of the following conditions exists:
- A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
- A domain index exists for a LOB column.
- A table is in a cluster.
- A table has BFILE columns or columns of opaque types.
- A table has VARRAY columns with an embedded opaque type.
- The table has encrypted columns.
- Supplemental logging is enabled and the table has at least one LOB column.
- The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
– There is an active trigger
– The table is partitioned and does not have any indexes
– Fine-grained access control for INSERT mode is enabled for the table.
– An enabled constraint exists (other than table check constraints)
– A unique index exists
5.2. Data could also have been loaded in "Direct Path" mode, but the parameters QUERY or REMAP_DATA were used for the specified table in the Import Data Pump job.
5.3. Data could also have been loaded in "Direct Path" mode, but the table or partition is relatively large (> 250 MB) and parallel SQL can be used to speed up the load even more.
Note that with a load of data in External Tables mode, parallel I/O execution processes (PX processes) can be used to load the data in parallel. In that case the Data Pump Worker process acts as the coordinator for the PX processes. However, this does not apply when the table has a LOB column: in that case the table parallelism will always be 1. See also:
Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"
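Conversely, when data is imported into a pre-existing table that has, for example, an enabled primary key constraint (and therefore a unique index), condition 5.1 applies and the data is loaded in "External Tables" mode (with the APPEND hint where possible). A minimal sketch, with assumed object names:

%impdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp LOGFILE=impdp_et.log \
  TABLES=scott.my_tab TABLE_EXISTS_ACTION=APPEND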
6. How to enforce a specific load/unload method?

In very specific situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data. Examples:
%expdp system/manager ... ACCESS_METHOD=DIRECT_PATH
%expdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE
or:
%impdp system/manager ... ACCESS_METHOD=DIRECT_PATH
%impdp system/manager ... ACCESS_METHOD=EXTERNAL_TABLE
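For instance, to force a table export to unload its data through the external tables method, the parameter can be added to an otherwise normal job (directory object, dump file and table name below are assumptions):

%expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_et.dmp LOGFILE=expdp_et.log \
  TABLES=scott.my_tab ACCESS_METHOD=EXTERNAL_TABLE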
Important points to note when the parameter ACCESS_METHOD is specified for a job: the requested method can only be honored if it is actually possible for the table in question. If it is not (for example, because the structure of the table does not allow it), the job fails for that table with an ORA-31696 error, for example:
...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31696: unable to export/import TABLE_DATA:"SCOTT"."MY_TAB" using client specified DIRECT_PATH method
...
To determine which access method is used for a table, the Data Pump job can be run with tracing of the Worker process(es) enabled, e.g.:

%expdp system/manager DIRECTORY=my_dir \
  DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log \
  TABLES=scott.my_tab TRACE=400300
The Worker trace file shows the method with which the data was unloaded (or loaded, in the case of Import Data Pump):
...
KUPW:14:57:14.289: 1: object: TABLE_DATA:"SCOTT"."MY_TAB"
KUPW:14:57:14.289: 1: TABLE_DATA:"SCOTT"."MY_TAB" external table, parallel: 1
...
For details, see also:
Note:286496.1 " Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump"
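If the job has already finished, the relevant lines can also be retrieved from the Worker trace file afterwards. A sketch, assuming a 10g instance named ORCL whose Worker trace files (named like ORCL_dw01_<pid>.trc) are written to the BACKGROUND_DUMP_DEST directory; the path below is an assumption:

%cd /u01/app/oracle/admin/ORCL/bdump
%grep -Ei "external table|direct path" ORCL_dw*.trc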
7. Known issues.

7.1. Bug 4722517 - Materialized view log not updated after import into existing table
- Defect: Bug:4722517 "MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE"
- Symptoms: a materialized view is created with FAST REFRESH on a master table; if data is imported into this master table, then these changes (inserts) do not show up in the materialized view log
- Releases: 10.1.0.2.0 and higher
- Fixed in: not applicable, closed as not-a-bug
- Patched files: not applicable
- Workaround: if possible, import into a temporary holding table and then copy the data into the master table with a conventional "insert as select" (sketched after this item)
- Cause: a fast refresh does not apply changes that result from bulk load operations on masters, such as an INSERT with the APPEND hint used by Import Data Pump
- Trace: not applicable, changes are not propagated
- Remarks: see also Note:340789.1 "Import Datapump (Direct Path) Does Not Update Materialized View Logs "
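A minimal sketch of the workaround, assuming the data was first imported into a holding table SCOTT.MY_TAB_STAGE (a hypothetical name) in the same database:

-- Conventional-path insert: the rows pass through the SQL layer,
-- so the materialized view log on SCOTT.MY_TAB is maintained.
INSERT INTO scott.my_tab
SELECT * FROM scott.my_tab_stage;
COMMIT;

-- Drop the holding table afterwards.
DROP TABLE scott.my_tab_stage PURGE;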
7.2. Bug 5599947 - Export Data Pump is slow when table has a LOB column
- Defect: Bug:5599947 "DATAPUMP EXPORT VERY SLOW"
- Symptoms: Export Data Pump has low performance when exporting table with LOB column
- Releases: 11.1.0.6 and below
- Fixed in: not applicable, closed as not feasible to fix
- Patched files: not applicable
- Workaround: if possible, reorganize the large table with the LOB column into a partitioned table (sketched after this item)
- Cause: if a table has a LOB column, and the unload or load takes place in "External Tables" mode, then we cannot make use of parallel I/O execution Processes (PX processes)
- Trace: not applicable
- Remarks: see also Bug:5943346 "PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN"
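One possible way to implement the workaround, sketched under the assumption that the table has a numeric ID column suitable as a hash partitioning key (all object and column names are hypothetical). With a partitioned table, the partitions become separate TABLE_DATA objects that can be handled by different Worker processes when PARALLEL > 1:

-- Build a hash-partitioned copy of the table (LOB data is copied as well).
-- Indexes, constraints, triggers and grants must be re-created separately.
CREATE TABLE scott.my_lob_tab_part
PARTITION BY HASH (id) PARTITIONS 8
AS SELECT * FROM scott.my_lob_tab;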
7.3. Bug 5941030 - Corrupt blocks after Import Data Pump when table has LONG / LONG RAW column
- Defect: Bug:5941030 "Datapump import can produce corrupt blocks when there is a LONG / LONG RAW"
- Symptoms: Direct Path import of a LONG / LONG RAW column can create corrupt blocks in the database. If DB_BLOCK_CHECKING is enabled then an ORA-600 [6917] error can be signalled. If not then the corrupt block can cause subsequent problems, like ORA-1498 (block check failure) on an analyze of the table.
- Releases: 11.1.0.6 and below
- Fixed in: 10.2.0.5.0 and 11.1.0.7.0 and higher; for some platforms a fix on top of 10.2.0.2.0 and on top of 10.2.0.3.0 is available with Patch:5941030
- Patched files: kdbl.o
- Workaround: if possible, use the classic export and import clients (exp/imp) to transfer this table (sketched after this item)
- Cause: internal issue with column count when loading table with LONG/LONG RAW column in Direct Path mode
- Trace: not applicable
- Remarks: see also Note:457128.1 "Logical Corruption Encountered After Importing Table With Long Column Using DataPump"
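A minimal sketch of the workaround with the classic clients, assuming a table SCOTT.LONG_TAB (a hypothetical name); the dump file is transferred between hosts by other means if needed:

%exp system/manager FILE=exp_long.dmp LOG=exp_long.log TABLES=scott.long_tab
%imp system/manager FILE=exp_long.dmp LOG=imp_long.log FROMUSER=scott TOUSER=scott TABLES=long_tab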
8. For Support: Enhancement Requests.

Open Enhancement Requests:
BUG:4722517 - MATERIALIZED VIEW LOG NOT UPDATED AFTER IMPORT DATAPUMP JOB INTO EXISTING TABLE
BUG:4727162 - PRODUCT ENHANCEMENT: ADD NEW DATAPUMP EXT TAB ACCESS METHOD WITHOUT APPEND HINT
BUG:5599947 - DATAPUMP EXPORT VERY SLOW
BUG:5941030 - DATAPUMP IMPORT CAN CORRUPT DATA WHEN THERE IS A LONG / LONG RAW
BUG:5943346 - PRODUCT ENHANCEMENT: PARALLELISM OF DATAPUMP JOB ON TABLE WITH LOB COLUMN
References

NOTE:155477.1 - Parameter DIRECT: Conventional Path Export Versus Direct Path Export
NOTE:286496.1 - Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump
NOTE:340789.1 - Import Datapump (Direct Path) Does Not Update Materialized View Logs
NOTE:365459.1 - Parallel Capabilities of Oracle Data Pump
NOTE:453895.1 - Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp)
NOTE:457128.1 - Logical Corruption Encountered After Importing Table With Long Column Using DataPump
NOTE:469439.1 - IMPDP Can Fail with ORA-31696 if ACCESS_METHOD=DIRECT_PATH Is Manually Specified
http://www.oracle.com/technology/pub/notes/technote_pathvsext.html