1.3.7 Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes. Beginning with Oracle Database 11g Release 1, you must use only Data Pump to export and import the tablespace metadata for tablespaces that contain XMLTypes.
Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from different OS
Document 13001379.8 Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables
Goal
Starting with Oracle Database 10g, you can transport tablespaces across platforms. This document provides step-by-step guidance on transporting tablespaces whose datafiles reside in ASM as well as on an OS file system.
SQL> COLUMN PLATFORM_NAME FORMAT A32
SQL> SELECT * FROM V$TRANSPORTABLE_PLATFORM;
PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- -------------------------------- --------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
7 Microsoft Windows IA (32-bit) Little
10 Linux IA (32-bit) Little
6 AIX-Based Systems (64-bit) Big
3 HP-UX (64-bit) Big
5 HP Tru64 UNIX Little
4 HP-UX IA (64-bit) Big
11 Linux IA (64-bit) Little
15 HP Open VMS Little
8 Microsoft Windows IA (64-bit) Little
9 IBM zSeries Based Linux Big
13 Linux 64-bit for AMD Little
16 Apple Mac OS Big
12 Microsoft Windows 64-bit for AMD Little
17 Solaris Operating System (x86) Little
Run the following query in the source database to determine its platform and endian format:
SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
If you find that the endian formats differ, the tablespace set must be converted as part of the transport:
RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
RMAN> convert tablespace TBS2 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
Then copy the datafiles and the export dump file to the target environment.
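For example, the converted datafiles and the export dump file could be copied with scp (the host name and file names here are illustrative, not from this document):

```
$ scp /tmp/tbs_exp.dmp /tmp/data_D-*.dbf oracle@target-host:/tmp/
```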
Import the transportable tablespace set.
Using the traditional import utility:
imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
Using Data Pump:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
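With the directory object in place, the Data Pump import of the tablespace metadata might look like the following (the dump file name follows the imp example above; the datafile paths are assumptions):

```
impdp system/password directory=dpump_dir dumpfile=tbs_exp.dmp logfile=tbs_imp.log \
  transport_datafiles='/tmp/tbs1.dbf','/tmp/tbs2.dbf'
```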
See Document 1334152.1, Corrupt IOT when using Transportable Tablespace to HP from different OS.
If the transported tables contain dropped columns, you may hit Bug 13001379 - Datapump transport_tablespaces produces wrong dictionary metadata for some tables. Document 1440203.1 gives the details of this alert.
Known issue when using DBMS_FILE_TRANSFER
=> Unpublished Bug 13636964 - ORA-19563 from RMAN convert on datafile copy transferred with DBMS_FILE_TRANSFER (Doc ID 13636964.8)
Versions confirmed as being affected
11.2.0.3
The fix for this issue is first included in
12.1.0.1 (Base Release)
11.2.0.4 (Future Patch Set)
Description
RMAN convert operations fail on files that were transferred with DBMS_FILE_TRANSFER.
For example:
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of conversion at target command at 01/24/2012 16:22:23
ORA-19563: cross-platform datafile header validation failed for file +RECO/soets_9.tf
The following query lists the tablespaces that contain XMLType tables:
select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name and
t.tablespace_name=p.tablespace_name and
x.owner=u.username;
SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
Generate OS files from the ASM datafiles, in the format of the target platform.
RMAN> CONVERT TABLESPACE TBS1
TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
RMAN> CONVERT TABLESPACE TBS2
TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
Copy the generated files to the target server (if it is not the same machine as the source).
Import the transportable tablespace set.
Using the traditional import utility:
imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log transport_tablespace=y datafiles='/tmp/....','/tmp/...'
Import using Data Pump:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object IN VARCHAR2,
source_file_name IN VARCHAR2,
destination_directory_object IN VARCHAR2,
destination_file_name IN VARCHAR2,
destination_database IN VARCHAR2);
CREATE OR REPLACE DIRECTORY source_dir AS '+DGROUPS/subdir';
GRANT READ,WRITE ON DIRECTORY source_dir TO "USER";
CREATE OR REPLACE DIRECTORY source_dir_1 AS '+DGROUPS/subdir/subdir_2';
Create a database link connecting to the target database:
CREATE DATABASE LINK DBS2 CONNECT TO user IDENTIFIED BY password USING 'target_connect';
-- Put a1.dat to a4.dat (using the dbs2 dblink),
-- from a level-2 subdirectory to its parent directory.
-- The user has READ privileges on source_dir_1 at dbs1
-- and WRITE privileges on target_dir at dbs2.
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE('source_dir_1', 'a1.dat',
'target_dir', 'a4.dat', 'dbs2' );
END;
[oracle@ct66rac01 ~]$ su -
[root@ct66rac01 oracle]# service nfs status
[root@ct66rac01 ~]# cat /etc/exports
/home/oracle/xtts *(rw,sync,no_root_squash,insecure,anonuid=500,anongid=500)
[root@ct66rac01 oracle]# service nfs start
3. ##ct6604## Create the test users, tablespaces, tables, and grants on the source. (These grants and tables are used to verify the migration afterwards.)
[oracle@ct6604 ~]$ export ORACLE_SID=ctdb
[oracle@ct6604 ~]$ sqlplus / as sysdba
SQL> create tablespace tbs01 datafile '/u02/oradata/ctdb/tbs01.dbf' size 10m autoextend on next 2m maxsize 4g;
SQL> create tablespace tbs02 datafile '/u02/oradata/ctdb/tbs02.dbf' size 10m autoextend on next 2m maxsize 4g;
SQL> create user test01 identified by test01 default tablespace tbs01;
SQL> create user test02 identified by test02 default tablespace tbs02;
SQL> grant connect,resource to test01;
SQL> grant connect,resource to test02;
SQL> grant execute on dbms_crypto to test02;
SQL> create table test01.tb01 as select * from dba_objects;
SQL> create table test02.tb01 as select * from dba_objects;
SQL> grant select on test01.tb01 to test02;
SQL> exit
4. ##ct6604## On the source, mount the target's NFS export on /home/oracle/xtts.
[oracle@ct6604 ~]$ mkdir /home/oracle/xtts
[oracle@ct6604 ~]$ su -
[root@ct6604 ~]# showmount -e 192.108.56.101
Export list for 192.108.56.101:
/home/oracle/xtts *
[root@ct6604 ~]# mount -t nfs 192.108.56.101:/home/oracle/xtts /home/oracle/xtts
15. ##ct66rac01## On the target, create the users and import the transported tablespaces.
[oracle@ct66rac01 ~]$ export ORACLE_SID=rac11g1
[oracle@ct66rac01 xtts]$ sqlplus / as sysdba
SQL> create user test01 identified by test01;
SQL> create user test02 identified by test02;
SQL> grant connect,resource to test01;
SQL> grant connect,resource to test02;
SQL> exit
Import: Release 11.2.0.4.0 - Production on Fri Jan 15 17:18:14 2016
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: system
Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** directory=dump_oradata nologfile=y network_link=lnk_ctdb transport_full_check=no transport_tablespaces=TBS01,TBS02 transport_datafiles=+DATA/tbs01_5.xtf,+DATA/tbs02_6.xtf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Jan 15 17:19:07 2016 elapsed 0 00:00:48
16. ##ct66rac01## On the target, verify that the imported data and privileges match the source. (Here we find that the execute on dbms_crypto grant to test02 was not imported; this is a pre-existing impdp limitation. Sort out such privileges before doing XTTS to reduce downtime.)
[oracle@ct66rac01 xtts]$ sqlplus / as sysdba
SQL> alter tablespace tbs01 read write;
SQL> alter tablespace tbs02 read write;
SQL> alter user test01 default tablespace tbs01;
SQL> alter user test02 default tablespace tbs02;
SQL> select count(1) from test01.tb01;
/* COUNT(1): 345732 */
SQL> select * from dba_tab_privs where grantee='TEST02';
/* GRANTEE OWNER  TABLE_NAME GRANTOR PRIVILEGE GRANTABLE HIERARCHY
   TEST02  TEST01 TB01       TEST01  SELECT    NO        NO */
-- select * from dba_tab_privs where owner='SYS' and grantee='TEST02';
SQL> grant execute on dbms_crypto to test02;
SQL> exit
A few minor issues hit during testing:
1. "Cant find xttplan.txt, TMPDIR undefined at xttdriver.pl line 1185." - make sure the environment variable TMPDIR=/home/oracle/xtts/script is set.
2. "Unable to fetch platform name" - ORACLE_SID was not set before running xttdriver.pl.
3. "Some failure occurred. Check /home/oracle/xtts/script/FAILED for more details. If you have fixed the issue, please delete /home/oracle/xtts/script/FAILED and run it again OR run xttdriver.pl with -L option" - after xttdriver.pl fails, delete the FAILED file before running it again.
4. "Can't locate strict.pm in @INC" - use $ORACLE_HOME/perl/bin/perl rather than the system perl.
Notes: The test is complete and fairly straightforward. With the preparation done, a few runs of $ORACLE_HOME/perl/bin/perl xttdriver.pl on the source and target followed by an impdp finish the job. Using NFS in this test avoids transferring files separately and makes the whole operation much cleaner. GoldenGate is also a good option for reducing migration downtime. For a whole-database migration where the platforms differ or are the same but share the same endian format, consider Data Guard first; see Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1).
XTTS to 11g
When using Cross Platform Transportable Tablespaces (XTTS) to migrate data between systems that have different endian formats, the amount of downtime required can be substantial because it is directly proportional to the size of the data set being moved. However, combining XTTS with Cross Platform Incremental Backup can significantly reduce the amount of downtime required to move data between platforms.
Traditional Cross Platform Transportable Tablespaces
The high-level steps in a typical XTTS scenario are the following:
Make tablespaces in source database READ ONLY
Transfer datafiles to destination system
Convert datafiles to destination system endian format
Export metadata of objects in the tablespaces from source database using Data Pump
Import metadata of objects in the tablespaces into destination database using Data Pump
Make tablespaces in destination database READ WRITE
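The steps above can be sketched with the following commands (tablespace, file, and directory names are illustrative only):

```sql
-- 1. On the source: make the tablespace read only
ALTER TABLESPACE ts1 READ ONLY;

-- 2./3. Transfer the datafiles and, if the endian formats differ, convert them:
-- RMAN> CONVERT TABLESPACE ts1 TO PLATFORM 'Linux IA (64-bit)' FORMAT '/tmp/%U';

-- 4. Export the metadata with Data Pump:
-- $ expdp system/password directory=dp_dir dumpfile=ts1_exp.dmp transport_tablespaces=ts1

-- 5. Import the metadata on the destination:
-- $ impdp system/password directory=dp_dir dumpfile=ts1_exp.dmp transport_datafiles='/u01/oradata/ts1.dbf'

-- 6. On the destination: make the tablespace read write
ALTER TABLESPACE ts1 READ WRITE;
```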
Because the data transported must be made read only at the very start of the procedure, the application that owns the data is effectively unavailable to users for the entire duration of the procedure. Due to the serial nature of the steps, the downtime required for this procedure is proportional to the amount of data. If data size is large, datafile transfer and convert times can be long, thus downtime can be long.
Reduce Downtime using Cross Platform Incremental Backup
To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to roll forward datafile copies using incremental backups, to work in a cross-platform scenario. By using a series of incremental backups, each smaller than the last, the data at the destination system can be brought almost current with the source system, before any downtime is required. The downtime required for datafile transfer and convert when combining XTTS with Cross Platform Incremental Backup is now proportional to the rate of data block changes in the source system.
The Cross Platform Incremental Backup feature does not affect the amount of time it takes to perform other actions for XTTS, such as metadata export and import. Hence, databases that have very large amounts of metadata (DDL) will see limited benefit from Cross Platform Incremental Backup since migration time is typically dominated by metadata operations, not datafile transfer and conversion.
Only those database objects that are physically located in the tablespaces being transported are copied to the destination system. If other objects located in different tablespaces must also be transported (for example, PL/SQL objects, sequences, and so on that reside in the SYSTEM tablespace), you can use Data Pump to copy those objects to the destination system.
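For instance, PL/SQL objects and sequences owned by an application schema could be brought over with a metadata-only export (the schema and file names here are illustrative, not taken from this document):

```
expdp system/password directory=dp_dir dumpfile=app_code.dmp schemas=app_user \
  content=metadata_only include=PROCEDURE include=FUNCTION include=PACKAGE include=SEQUENCE
```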
The high-level steps using the cross platform incremental backup capability are the following:
1. Prepare phase (source data remains online)
Transfer datafiles to destination system
Convert datafiles, if necessary, to destination system endian format
2. Roll Forward phase (source data remains online - Repeat this phase as many times as necessary to catch destination datafile copies up to source database)
Create incremental backup on source system
Transfer incremental backup to destination system
Convert incremental backup to destination system endian format and apply the backup to the destination datafile copies
NOTE: In Version 3, if a datafile is added to the tablespace OR a new tablespace name is added to the xtt.properties file, a warning and additional instructions will be required.
3. Transport phase (source data is READ ONLY)
Make tablespaces in source database READ ONLY
Repeat the Roll Forward phase one final time
This step makes destination datafile copies consistent with source database.
Time for this step is significantly shorter than traditional XTTS method when dealing with large data because the incremental backup size is smaller.
Export metadata of objects in the tablespaces from source database using Data Pump
Import metadata of objects in the tablespaces into destination database using Data Pump
Make tablespaces in destination database READ WRITE
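Using the supporting scripts described later in this document, the first two phases map roughly onto xttdriver.pl options (only options actually named elsewhere in this document are shown; the option for creating each incremental backup on the source is covered in the phase instructions):

```
# Prepare phase (RMAN backup method)
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p   # create datafile copies/backups
[oracle@dest]$   $ORACLE_HOME/perl/bin/perl xttdriver.pl -c   # convert on the destination

# Roll Forward phase (repeat as needed)
[oracle@dest]$   $ORACLE_HOME/perl/bin/perl xttdriver.pl -r   # apply incremental backup
```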
The purpose of this document is to provide an example of how to use this enhanced RMAN cross platform incremental backup capability to reduce downtime when transporting tablespaces across platforms.
SCOPE
The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met.
If you are migrating from a little endian platform to Oracle Linux, then the migration method that should receive first consideration is Data Guard. See Note 413484.1 for details about heterogeneous platform support for Data Guard between your current little endian platform and Oracle Linux.
This method can also be used with 12c databases; however, for an alternative method for 12c, see Note 2005729.1, 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.
This document provides a procedural example of transporting two tablespaces called TS1 and TS2 from an Oracle Solaris SPARC system to an Oracle Exadata Database Machine running Oracle Linux, incorporating Oracle's Cross Platform Incremental Backup capability to reduce downtime.
After performing the Initial Setup phase, moving the data is performed in the following three phases:
Prepare phase
During the Prepare phase, datafile copies of the tablespaces to be transported are transferred to the destination system and converted. The application being migrated is fully accessible during the Prepare phase. The Prepare phase can be performed using RMAN backups or dbms_file_transfer. Refer to the Selecting the Prepare Phase Method section for details about choosing the Prepare phase method.
Roll Forward phase
During the Roll Forward phase, the datafile copies that were converted during the Prepare phase are rolled forward using incremental backups taken from the source database. By performing this phase multiple times, each successive incremental backup becomes smaller and faster to apply, allowing the data at the destination system to be brought almost current with the source system. The application being migrated is fully accessible during the Roll Forward phase.
Transport phase
During the Transport phase, the tablespaces being transported are put into READ ONLY mode, and a final incremental backup is taken from the source database and applied to the datafile copies on the destination system, making the destination datafile copies consistent with source database. Once the datafiles are consistent, the tablespaces are TTS-exported from the source database and TTS-imported into the destination database. Finally, the tablespaces are made READ WRITE for full access on the destination database. The application being migrated cannot receive any updates during the Transport phase.
The Cross Platform Incremental Backup core functionality is delivered in Oracle Database 11.2.0.4 and later. See the Requirements and Recommendations section for details. In addition, a set of supporting scripts in the file rman-xttconvert_2.0.zip are attached to this document that are used to manage the procedure required to perform XTTS with Cross Platform Incremental Backup. The two primary supporting scripts files are the following:
Perl script xttdriver.pl - the script that is run to perform the main steps of the XTTS with Cross Platform Incremental Backup procedure.
Parameter file xtt.properties - the file that contains your site-specific configuration.
Requirements and Recommendations
This section contains the following subsections:
Prerequisites
Selecting the Prepare Phase Method
Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance
Prerequisites
The following prerequisites must be met before starting this procedure:
The limitations and considerations for transportable tablespaces must still be followed. They are defined in the following manuals:
Oracle Database Administrator's Guide
Oracle Database Utilities
In addition to the limitations and considerations for transportable tablespaces, the following conditions must be met:
The current version does NOT support Windows.
The source database must be running 10.2.0.3 or higher.
The source database must have its COMPATIBLE parameter set to 10.2.0 or higher.
The source database's COMPATIBLE parameter must not be greater than the destination database's COMPATIBLE parameter.
The source database must be in ARCHIVELOG mode.
The destination database must be running 11.2.0.4 or higher.
Although the preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of Red Hat Linux), this procedure can be used with other UNIX-based operating systems.
The Oracle version of the source must be lower than or equal to that of the destination.
RMAN's default device type should be configured to DISK.
RMAN on the source system must not have DEVICE TYPE DISK configured with COMPRESSED; otherwise the procedure may return: ORA-19994: cross-platform backup of compressed backups different endianess.
The set of tablespaces being moved must all be online, and contain no offline data files. Tablespaces must be READ WRITE. Tablespaces that are READ ONLY may be moved with the normal XTTS method. There is no need to incorporate Cross Platform Incremental Backups to move tablespaces that are always READ ONLY.
All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
If the Prepare Phase method selected is dbms_file_transfer, then the destination database must be 11.2.0.4. See the Selecting the Prepare Phase Method section for details.
If the Prepare Phase method selected is RMAN backup, then staging areas are required on both the source and destination systems. See the Selecting the Prepare Phase Method section for details.
It is not supported to run this procedure against a standby or snapshot standby database.
If the destination database version is 11.2.0.3 or lower, then a separate database home containing 11.2.0.4 running an 11.2.0.4 instance on the destination system is required to perform the incremental backup conversion. See the Destination Database 11.2.0.3 and Earlier Requires a Separate Incremental Convert Home and Instance section for details. If using ASM for 11.2.0.4 Convert Home, then ASM needs to be on 11.2.0.4, else error ORA-15295 (e.g. ORA-15295: ASM instance software version 11.2.0.3.0 less than client version 11.2.0.4.0) is raised.
Whole Database Migration
If Cross Platform Incremental Backups will be used to reduce downtime for a whole database migration, then the steps in this document can be combined with the XTTS guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database 11g.
This method can also be used with 12c databases, however, for an alternative method for 12c see:
Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.
Selecting the Prepare Phase Method
During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script. There are two possible methods:
Using dbms_file_transfer (DFT) transfer (using xttdriver.pl -S and -G options)
Using Recovery Manager (RMAN) RMAN backup (using xttdriver.pl -p and -c options)
The dbms_file_transfer method uses the dbms_file_transfer.get_file() subprogram to transfer the datafiles from the source system to the target system over a database link. The dbms_file_transfer method has the following advantages over the RMAN method: 1) it does not require staging area space on either the source or destination system; 2) datafile conversion occurs automatically during transfer - there is not a separate conversion step. The dbms_file_transfer method requires the following:
A destination database running 11.2.0.4. Note that an incremental convert home or instance does not participate in dbms_file_transfer file transfers.
A database directory object in the source database from where the datafiles are copied.
A database directory object in the destination database to where the datafiles are placed.
A database link in the destination database referencing the source database.
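A transfer using the objects just listed might look like the following (directory, link, and file names are illustrative):

```sql
BEGIN
  -- Run in the destination database: pull ts1.dbf from the source over
  -- the TTSLINK database link; conversion happens during the transfer.
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'ts1.dbf',
    source_database              => 'TTSLINK',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'ts1.dbf');
END;
/
```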
The RMAN backup method runs RMAN on the source system to create backups of the datafiles to be transported. The backup files must then be manually transferred over the network to the destination system. On the destination system the datafiles are converted by RMAN, if necessary. The output of the RMAN conversion places the datafiles in their final location, where they will be used by the destination database. In the original version of xttdriver.pl, this was the only method supported. The RMAN backup method requires the following:
Staging areas are required on both the source and destination systems for the datafile copies created by RMAN. The staging areas are referenced in the xtt.properties file using the parameters dfcopydir and stageondest. The final destination where converted datafiles are placed is referenced in the xtt.properties file using the parameter storageondest. Refer to the Description of Parameters in Configuration File xtt.properties section for details and sizing guidelines.
Details of using each of these methods are provided in the instructions below. The recommended method is the dbms_file_transfer method.
Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance
The Cross Platform Incremental Backup core functionality (i.e. incremental backup conversion) is delivered in Oracle Database 11.2.0.4 and later. If the destination database version is 11.2.0.4 or later, then the destination database can perform this function. However, if the destination database version is 11.2.0.3 or earlier, then, for the purposes of performing incremental backup conversion, a separate 11.2.0.4 software home, called the incremental convert home, must be installed, and an instance, called the incremental convert instance, must be started in NOMOUNT state using that home. The incremental convert home and incremental convert instance are temporary and are used only during the migration.
Note that because the dbms_file_transfer Prepare Phase method requires destination database 11.2.0.4, which can be used to perform the incremental backup conversions function (as stated above), an incremental convert home and incremental convert instance are usually only applicable when the Prepare Phase method is RMAN backup.
For details about setting up a temporary incremental convert instance, see instructions in Phase 1.
Troubleshooting
To enable debug mode, either run xttdriver.pl with the -d flag, or set environment variable XTTDEBUG=1 before running xttdriver.pl. Debug mode enables additional screen output and causes all RMAN executions to be performed with the debug command line option.
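Either form enables the same debug behavior (the -p option shown here is just an example step):

```
# Option 1: pass the debug flag directly
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p -d

# Option 2: set the environment variable for all subsequent runs
[oracle@source]$ export XTTDEBUG=1
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p
```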
Known Issues
If the source database contains nested IOTs with key compression, then the fix for Bug 14835322 must be installed in the destination database home (where the tablespace plug operation occurs).
If you wish to utilize block change tracking on the source database when incremental backups are created, then the fix for Bug 16850197 must be installed in the source database home.
If using ASM in both source and destination, see XTTS Creates Alias on Destination when Source and Destination use ASM (Note 2351123.1)
If the roll forward phase (xttdriver.pl -r) fails with the following errors, then verify RMAN DEVICE TYPE DISK is not configured COMPRESSED.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: RestoreSetPiece
DECLARE
*
ERROR at line 1:
ORA-19624: operation failed, retry possible
ORA-19870: error while restoring backup piece /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup piece
ORA-19837: invalid blocksize 0 in backup piece header
ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
ORA-06512: at line 40
Note 17866999.8 can also be consulted for known issues. If the source contains cluster objects, then after XTTS has completed, run "analyze cluster &cluster_name validate structure cascade" in the target database. If it reports an ORA-1499, open the trace file and check whether it has entries like:
kdcchk: index points to block 0x01c034f2 slot 0x1 chain length is 256
kdcchk: chain count wrong 0x01c034f2.1 chain is 1 index says 256 last entry 0x01c034f2.1 blockcount = 1
kdavls: kdcchk returns 3 when checking cluster dba 0x01c034a1 objn 90376
Then, to repair this inconsistency, either:
1. Rebuild the cluster index, or
2. Install the fix for bug 17866999 and run dbms_repair.repair_cluster_index_keycount.
If after repairing the inconsistency the "analyze cluster &cluster_name validate structure cascade" still reports issues then recreate the affected cluster which involves recreating its tables.
Note that the fix for bug 17866999 is a workaround fix to repair the index cluster; it does not prevent the problem. Oracle has not found a complete fix for this situation, so it can affect any RDBMS version.
Transport Tablespaces with Reduced Downtime using Cross Platform Incremental Backup
The XTTS with Cross Platform Incremental Backups procedure is divided into the following four phases:
Phase 1 - Initial Setup phase
Phase 2 - Prepare phase
Phase 3 - Roll Forward phase
Phase 4 - Transport phase
Conventions Used in This Document
All command examples use bash shell syntax.
Commands prefaced by the shell prompt string [oracle@source]$ indicate commands run as the oracle user on the source system.
Commands prefaced by the shell prompt string [oracle@dest]$ indicate commands run as the oracle user on the destination system.
Phase 1 - Initial Setup
Perform the following steps to configure the environment to use Cross Platform Incremental Backups:
Step 1.1 - Install the Destination Database Software and Create the Destination Database
Install the desired Oracle Database software on the destination system that will run the destination database. It is highly recommended to use Oracle Database 11.2.0.4 or later. Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.
Identify (or create) a database on the destination system to transport the tablespace(s) into and create the schema users required for the tablespace transport.
Per generic TTS requirement, ensure that the schema users required for the tablespace transport exist in the destination database.
Step 1.2 - If necessary, Configure the Incremental Convert Home and Instance
See the Destination Database 11.2.0.3 and Earlier Requires a Separate Incremental Convert Home and Instance section for details.
Skip this step if the destination database software version is 11.2.0.4 or later. Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.
If the destination database is 11.2.0.3 or earlier, then you must configure a separate incremental convert instance by performing the following steps:
Install a new 11.2.0.4 database home on the destination system. This is the incremental convert home.
Using the incremental convert home, start up an instance in the NOMOUNT state. This is the incremental convert instance. A database does not need to be created for the incremental convert instance. Only a running instance is required.
The following steps may be used to create an incremental convert instance named xtt running out of incremental convert home /u01/app/oracle/product/11.2.0.4/xtt_home:
[oracle@dest]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/xtt_home
[oracle@dest]$ export ORACLE_SID=xtt
[oracle@dest]$ sqlplus / as sysdba
SQL> startup nomount
If ASM storage is used for the xtt.properties parameter backupondest (described below), then the COMPATIBLE initialization parameter setting for this instance must be equal to or higher than the rdbms.compatible setting for the ASM disk group used.
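Because no database is created for the convert instance, a minimal parameter file is enough to start it NOMOUNT; it might look like the following (values are illustrative):

```
# $ORACLE_HOME/dbs/initxtt.ora -- minimal pfile for the incremental convert instance
db_name=xtt
# Must be >= the ASM disk group's rdbms.compatible if backupondest is on ASM
compatible=11.2.0.4.0
```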
Step 1.3 - Identify Tablespaces to be Transported
Identify the tablespace(s) in the source database that will be transported. Tablespaces TS1 and TS2 will be used in the examples in this document. As indicated above, the limitations and considerations for transportable tablespaces must still be followed.
Step 1.4 - If Using dbms_file_transfer Prepare Phase Method, then Configure Directory Objects and Database Links
Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.
If using dbms_file_transfer as the Prepare Phase method, then three database objects must be created:
A database directory object in the source database from where the datafiles are copied
A database directory object in the destination database to where the datafiles are placed
A database link in the destination database referencing the source database
The source database directory object references the location where the datafiles in the source database currently reside. For example, to create directory object sourcedir that references datafiles in ASM location +DATA/prod/datafile, connect to the source database and run the following SQL command:
SQL@source> create directory sourcedir as '+DATA/prod/datafile';
The destination database directory object references the location where the datafiles will be placed on the destination system. This should be the final location where the datafiles will reside when in use by the destination database. For example, to create directory object destdir that will place transferred datafiles in ASM location +DATA/prod/datafile, connect to the destination database and run the following SQL command:
SQL@dest> create directory destdir as '+DATA/prod/datafile';
The database link is created in the destination database, referencing the source database. For example, to create a database link named ttslink, run the following SQL command:
SQL@dest> create public database link ttslink connect to system identified by <password> using '<target_connect_string>';
Verify the database link can properly access the source system:
SQL@dest> select * from dual@ttslink;
Step 1.5 - Create Staging Areas
Create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: backupformat, backupondest.
Also, if using RMAN backups in the Prepare phase, create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: dfcopydir, stageondest.
Step 1.6 - Install xttconvert Scripts on the Source System
On the source system, as the oracle software owner, download and extract the supporting scripts attached as rman-xttconvert_2.0.zip to this document.
Step 1.7 - Configure xtt.properties on the Source System
Edit the xtt.properties file on the source system with your site-specific configuration. For more information about the parameters in the xtt.properties file, refer to the Description of Parameters in Configuration File xtt.properties section in the Appendix below.
Step 1.8 - Copy xttconvert Scripts and xtt.properties to the Destination System
As the oracle software owner copy all xttconvert scripts and the modified xtt.properties file to the destination system.
In the shell environment on both source and destination systems, set environment variable TMPDIR to the location where the supporting scripts exist. Use this shell to run the Perl script xttdriver.pl as shown in the steps below. If TMPDIR is not set, output files are created in and input files are expected to be in /tmp.
[oracle@source]$ export TMPDIR=/home/oracle/xtt
[oracle@dest]$ export TMPDIR=/home/oracle/xtt
Phase 2 - Prepare Phase
During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script. There are two possible methods:
Phase 2A - dbms_file_transfer Method
Phase 2B - RMAN Backup Method
Select and use one of these methods based upon the information provided in the Requirements and Recommendations section above.
NOTE: For a large number of files, using dbms_file_transfer has been found to be the fastest method for transferring datafiles to the destination.
Phase 2A - Prepare Phase for dbms_file_transfer Method
Only use the steps in Phase 2A if the Prepare Phase method chosen is dbms_file_transfer and the setup instructions have been completed, particularly those in Step 1.4.
During this phase datafiles of the tablespaces to be transported are transferred directly from source system and placed on the destination system in their final location to be used by the destination database. If conversion is required, it is performed automatically during transfer. No separate conversion step is required. The steps in this phase are run only once. The data being transported is fully accessible in the source database during this phase.
Step 2A.1 - Run the Prepare Step on the Source System
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:
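A typical invocation would be the following; the -S option is the dbms_file_transfer prepare step, per the xttdriver.pl options table in the Appendix below:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -S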
The prepare step performs the following actions on the source system:
Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
Creates the following files used later in this procedure:
xttnewdatafiles.txt
getfile.sql
The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE. The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY. If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process. No incremental apply is needed for those files.
Step 2A.2 - Transfer the Datafiles to the Destination System
On the destination system, log in as the oracle user and set the environment (ORACLE_HOME and ORACLE_SID environment variables) to the destination database (it is invalid to use the incremental convert instance for this step). Copy the xttnewdatafiles.txt and getfile.sql files created in step 2A.1 from the source system and run the -G get_file step as follows:
NOTE: This step copies all datafiles being transported from the source system to the destination system. The length of time for this step to complete is dependent on datafile size, and may be substantial. Use getfileparallel option for parallelism.
# MUST set environment to destination database
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
When this step is complete, the datafiles being transported will reside in the final location where they will be used by the destination database. Note that endian conversion, if required, is performed automatically during this step.
Proceed to Phase 3 to create and apply incremental backups to the datafiles.
Phase 2B - Prepare Phase for RMAN Backup Method
Only use the steps in Phase 2B if the Prepare Phase method chosen is RMAN backup and the setup instructions have been completed, particularly those in Step 1.5.
During this phase datafile copies of the tablespaces to be transported are created on the source system, transferred to the destination system, converted, and placed in their final location to be used by the destination database. The steps in this phase are run only once. The data being transported is fully accessible in the source database during this phase.
Step 2B.1 - Run the Prepare Step on the Source System
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:
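A typical invocation would be the following; the -p option is the RMAN backup prepare step, per the xttdriver.pl options table in the Appendix below:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p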
The prepare step performs the following actions on the source system:
Creates datafile copies of the tablespaces that will be transported in the location specified by the xtt.properties parameter dfcopydir.
Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
Creates the following files used later in this procedure:
xttplan.txt
rmanconvert.cmd
The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE. The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY. If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process. No incremental apply is needed for those files.
Step 2B.2 - Transfer Datafile Copies to the Destination System
On the destination system, logged in as the oracle user, transfer the datafile copies created in the previous step from the source system. Datafile copies on the source system are created in the location defined in xtt.properties parameter dfcopydir. The datafile copies must be placed in the location defined by xtt.properties parameter stageondest.
Any method of transferring the datafile copies from the source system to the destination system that results in a bit-for-bit copy is supported.
If the dfcopydir location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the datafile copies are already available in the expected location on the destination system.
In the example below, scp is used to transfer the datafile copies created by the previous step from the source system to the destination system.
Note that due to current limitations with cross-endian support in DBMS_FILE_TRANSFER and ASMCMD, you must use OS-level commands, such as scp or ftp, to transfer the copies from the source system to the destination system.
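A sketch of such a transfer, assuming a hypothetical destination hostname and the example staging locations from the Appendix (dfcopydir=/stage_source on the source, stageondest=/stage_dest on the destination):

[oracle@source]$ scp /stage_source/* oracle@dest:/stage_dest/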
Step 2B.3 - Convert the Datafile Copies on the Destination System
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:
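For example, assuming TMPDIR is /home/oracle/xtt on both systems as in Step 1.8, and a hypothetical source hostname; the -c option is the convert datafiles step, per the xttdriver.pl options table in the Appendix:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c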
The convert datafiles step converts the datafile copies in the stageondest location to the endian format of the destination system. The converted datafile copies are written in the location specified by the xtt.properties parameter storageondest. This is the final location where datafiles will be accessed when they are used by the destination database.
When this step is complete, the datafile copies in stageondest location are no longer needed and may be removed.
Phase 3 - Roll Forward Phase
During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward. This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database. The data being transported is fully accessible during this phase.
Step 3.1 - Create an Incremental Backup of the Tablespaces being Transported on the Source System
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:
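A typical invocation would be the following; the -i option is the create incremental step, per the xttdriver.pl options table in the Appendix:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i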
The create incremental step executes RMAN commands to generate incremental backups for all tablespaces listed in xtt.properties. It creates the following files used later in this procedure:
tsbkupmap.txt
incrbackups.txt
Step 3.2 - Transfer Incremental Backup to the Destination System
Transfer the incremental backup(s) created during the previous step to the stageondest location on the destination system. The list of incremental backup files to copy are found in the incrbackups.txt file on the source system.
If the backupformat location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the incremental backups are already available in the expected location on the destination system.
Step 3.3 - Convert the Incremental Backup and Apply to the Datafile Copies on the Destination System
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the xttplan.txt and tsbkupmap.txt files from the source system and run the rollforward datafiles step as follows:
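For example, assuming TMPDIR is /home/oracle/xtt on both systems as in Step 1.8, and a hypothetical source hostname; the -r option is the rollforward datafiles step, per the xttdriver.pl options table in the Appendix:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r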
The rollforward datafiles step connects to the incremental convert instance as SYS, converts the incremental backups, then connects to the destination database and applies the incremental backups for each tablespace being transported.
Note:
1. You must copy the xttplan.txt and tsbkupmap.txt files each time this step is executed, because their content differs with each iteration.
2. Do NOT change, copy, or modify the xttplan.txt.new file generated by the script.
3. The destination instance will be shut down and restarted by this process.
Step 3.4 - Determine the FROM_SCN for the Next Incremental Backup
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:
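A typical invocation would be the following; the -s option is the determine new FROM_SCN step, per the xttdriver.pl options table in the Appendix:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s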
The determine new FROM_SCN step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1.
Step 3.5 - Repeat the Roll Forward Phase (Phase 3) or Move to the Transport Phase (Phase 4)
At this point there are two choices:
If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.
NOTE: If a datafile has been added to a tablespace since the last incremental backup, and/or a new tablespace name has been added to xtt.properties, the following will appear:
Error: The incremental backup was not taken, as a datafile has been added to the tablespace.
Please do the following:
1. Copy fixnewdf.txt from the source to the destination temp dir
2. Copy backups: from to the in destination
3. On Destination, run $ORACLE_HOME/perl/bin/perl xttdriver.pl --fixnewdf
4. Re-execute the incremental backup in source: $ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr
NOTE: Before running the incremental backup, delete the FAILED file in the source temp dir or run xttdriver.pl with the -L option.
These instructions must be followed exactly as listed. The next incremental backup will include the new datafile.
Phase 4 - Transport Phase
During this phase the source data is made READ ONLY and the destination datafiles are made consistent with the source database by creating and applying a final incremental backup. After the destination datafiles are made consistent, the normal transportable tablespace steps are performed to export object metadata from the source database and import it into the destination database. The data being transported is accessible only in READ ONLY mode until the end of this phase.
Step 4.1 - Make Source Tablespaces READ ONLY in the Source Database
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.
system@source/prod SQL> alter tablespace TS1 read only;
Tablespace altered.
system@source/prod SQL> alter tablespace TS2 read only;
Tablespace altered.
Step 4.2 - Create the Final Incremental Backup, Transfer, Convert, and Apply It to the Destination Datafiles
Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles.
Step 4.3 - Import Object Metadata into Destination Database
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:
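A typical invocation would be the following; the -e option is the generate Data Pump TTS command step, per the xttdriver.pl options table in the Appendix:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e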
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt with the transportable tablespaces parameters TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES correctly set. Note that network_link mode initiates an import over a database link that refers to the source database. A separate export or dump file is not required. If you choose to perform the tablespace transport with this command, then you must edit the import command to replace import parameters DIRECTORY, LOGFILE, and NETWORK_LINK with site-specific values.
The following is an example network mode transportable import command:
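(A hedged sketch; the datafile paths below are hypothetical placeholders, and the DIRECTORY, LOGFILE, and NETWORK_LINK values must be replaced with your site-specific values, using the directory object and database link created earlier in this procedure.)

impdp system/<password> directory=dpump_dir logfile=tts_imp.log \
  network_link=ttslink transport_full_check=no \
  transport_tablespaces=TS1,TS2 \
  transport_datafiles='/oradata/prod/ts1.dbf','/oradata/prod/ts2.dbf'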
After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.
Database users that own objects being transported must exist in the destination database before performing the transportable import.
If you do not use network_link import, then perform the tablespace transport by running transportable mode Data Pump Export on the source database to export the object metadata being transported into a dump file, then transfer the dump file to the destination system, then run transportable mode Data Pump Import to import the object metadata into the destination database. Refer to the following manuals for details:
Oracle Database Administrator's Guide
Oracle Database Utilities
Step 4.4 - Make the Tablespace(s) READ WRITE in the Destination Database
The final step is to make the destination tablespace(s) READ WRITE in the destination database.
system@dest/prod SQL> alter tablespace TS1 read write;
Tablespace altered.
system@dest/prod SQL> alter tablespace TS2 read write;
Tablespace altered.
Step 4.5 - Validate the Transported Data
At this point, the transported data is available in the destination database. Perform application-specific validation to verify the transported data.
Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:
RMAN> validate tablespace TS1, TS2 check logical;
Phase 5 - Cleanup
If a separate incremental convert home and instance were created for the migration, then the instance may be shutdown and the software removed.
Files created by this process are no longer required and may now be removed. They include the following:
dfcopydir location on the source system
backupformat location on the source system
stageondest location on the destination system
backupondest location on the destination system
$TMPDIR location in both destination and source systems
Appendix
Description of Perl Script xttdriver.pl Options
The following table describes the options available for the main supporting script xttdriver.pl.
Option
Description
-S prepare source for transfer
-S option is used only when Prepare phase method is dbms_file_transfer.
Prepare step is run once on the source system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the source database. This step creates files xttnewdatafiles.txt and getfile.sql.
-G get datafiles from source
-G option is used only when Prepare phase method is dbms_file_transfer.
Get datafiles step is run once on the destination system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database. The -S option must be run beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.
This option connects to the destination database and runs script getfile.sql. getfile.sql invokes dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database directory object (defined by parameter srcdir) to the destination database directory object (defined by parameter dstdir) over a database link (defined by parameter srclink).
-p prepare source for backup
-p option is used only when Prepare phase method is RMAN backup.
Prepare step is run once on the source system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step connects to the source database and runs the xttpreparesrc.sql script once for each tablespace to be transported, as configured in xtt.properties. xttpreparesrc.sql does the following:
Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.
Identifies the SCN that will be used for the first iteration of the incremental backup step and writes it into file $TMPDIR/xttplan.txt.
Creates the initial datafile copies on the destination system in the location specified by the parameter dfcopydir set in xtt.properties. These datafile copies must be transferred manually to the destination system.
Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies to the required endian format on the destination system.
-c convert datafiles
-c option is used only when Prepare phase method is RMAN backup.
Convert datafiles step is run once on the destination system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to the proper endian format. Converted datafile copies are written on the destination system to the location specified by the parameter storageondest set in xtt.properties.
-i create incremental
Create incremental step is run one or more times on the source system with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will be used to roll forward the datafile copies on the destination system.
-r rollforward datafiles
Rollforward datafiles step is run once for every incremental backup created with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step connects to the incremental convert instance using the parameters cnvinst_home and cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then connects to the destination database and rolls forward the datafile copies by applying the incremental for each tablespace being transported.
-s determine new FROM_SCN
Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and ORACLE_SID) set to the source database. This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental backup will be.
-e generate Data Pump TTS command
Generate Data Pump TTS command step is run once on the destination system with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step creates the template of a Data Pump Import command that uses a network_link to import metadata of objects that are in the tablespaces being transported.
-d debug
-d option enables debug mode for xttdriver.pl and RMAN commands it executes. Debug mode can also be enabled by setting environment variable XTTDEBUG=1.
Description of Parameters in Configuration File xtt.properties
The following table describes the parameters defined in the xtt.properties file that is used by xttdriver.pl.
Parameter
Description
Example Setting
tablespaces
Comma-separated list of tablespaces to transport from the source database to the destination database. Must be on a single line; any subsequent lines will not be read.
tablespaces=TS1,TS2
platformid
Source database platform id, obtained from V$DATABASE.PLATFORM_ID.
platformid=2
srcdir
Directory object in the source database that defines where the source datafiles currently reside. Multiple locations can be used separated by ",". The srcdir to dstdir mapping can either be N:1 or N:N. i.e. there can be multiple source directories and the files will be written to a single destination directory, or files from a particular source directory can be written to a particular destination directory.
This parameter is used only when Prepare phase method is dbms_file_transfer.
srcdir=SOURCEDIR
srcdir=SRC1,SRC2
dstdir
Directory object in the destination database that defines where the destination datafiles will be created. If multiple source directories are used (srcdir), then multiple destinations can be defined so a particular source directory is written to a particular destination directory.
This parameter is used only when Prepare phase method is dbms_file_transfer.
dstdir=DESTDIR
dstdir=DST1,DST2
srclink
Database link in the destination database that refers to the source database. Datafiles will be transferred over this database link using dbms_file_transfer.
This parameter is used only when Prepare phase method is dbms_file_transfer.
srclink=TTSLINK
dfcopydir
Location on the source system where datafile copies are created during the "-p prepare" step.
This location must have sufficient free space to hold copies of all datafiles being transported.
This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system. See Note 359515.1 for mount option guidelines.
This parameter is used only when Prepare phase method is RMAN backup.
dfcopydir=/stage_source
backupformat
Location on the source system where incremental backups are created.
This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.
This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.
backupformat=/stage_source
stageondest
Location on the destination system where datafile copies are placed by the user when they are transferred manually from the source system.
This location must have sufficient free space to hold copies of all datafiles being transported.
This is also the location from which datafile copies and incremental backups are read when they are converted in the "-c convert datafiles" and "-r rollforward datafiles" steps.
This location may be a DBFS-mounted filesystem.
This location may be an NFS-mounted filesystem that is shared with the source system, in which case it should reference the same NFS location as the dfcopydir and backupformat parameters for the source system. See Note 359515.1 for mount option guidelines.
stageondest=/stage_dest
storageondest
Location on the destination system where the converted datafile copies will be written during the "-c conversion of datafiles" step.
This location must have sufficient free space to permanently hold the datafiles that are transported.
This is the final location of the datafiles where they will be used by the destination database.
This parameter is used only when Prepare phase method is RMAN backup.
storageondest=+DATA - or - storageondest=/oradata/prod/%U
backupondest
Location on the destination system where converted incremental backups on the destination system will be written during the "-r roll forward datafiles" step.
This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.
NOTE: If this is set to an ASM location then define properties asm_home and asm_sid below. If this is set to a file system location, then comment out asm_home and asm_sid parameters below.
backupondest=+RECO
cnvinst_home
Only set this parameter if a separate incremental convert home is in use.
ORACLE_HOME of the incremental convert instance that runs on the destination system.
cnvinst_sid
Only set this parameter if a separate incremental convert home is in use.
ORACLE_SID of the incremental convert instance that runs on the destination system.
cnvinst_sid=xtt
asm_home
ORACLE_HOME for the ASM instance that runs on the destination system.
NOTE: If backupondest is set to a file system location, then comment out both asm_home and asm_sid.
asm_home=/u01/app/11.2.0.4/grid
asm_sid
ORACLE_SID for the ASM instance that runs on the destination system.
asm_sid=+ASM1
parallel
Defines the degree of parallelism set in the RMAN CONVERT command file rmanconvert.cmd. This file is created during the prepare step and used by RMAN in the convert datafiles step to convert the datafile copies on the destination system. If this parameter is unset, xttdriver.pl uses parallel=8.
NOTE: RMAN parallelism used for the datafile copies created in the RMAN Backup prepare phase and the incremental backup created in the rollforward phase is controlled by the RMAN configuration on the source system. It is not controlled by this parameter.
parallel=3
rollparallel
Defines the level of parallelism for the -r roll forward operation.
rollparallel=2
getfileparallel
Defines the level of parallelism for the -G operation.
Default value is 1. Maximum supported value is 8.
getfileparallel=4
Known Issue
Known Issues for Cross Platform Transportable Tablespaces XTTS Document 2311677.1
Change History
Change - Date
rman_xttconvert_v3.zip released - adds support for added datafiles - 2017-Jun-06
rman-xttconvert_2.0.zip released - adds support for multiple source and destination directories - 2015-Apr-20
rman-xttconvert_1.4.2.zip released - adds parallelism support for the -G get file from source operation - 2014-Nov-14
rman-xttconvert_1.4.zip released - removes the staging area requirement, adds parallel rollforward, eliminates conversion instance requirements when using 11.2.0.4 - 2014-Feb-21
rman-xttconvert_1.3.zip released - improves handling of large databases with a large number of datafiles
If the Roll Forward phase (xttdriver.pl -r) fails with the error below, check whether RMAN DEVICE TYPE DISK is configured as COMPRESSED:
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: RestoreSetPiece
DECLARE
*
ERROR at line 1:
ORA-19624: operation failed, retry possible
ORA-19870: error while restoring backup piece /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup piece
ORA-19837: invalid blocksize 0 in backup piece header
ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
ORA-06512: at line 40
SQL> alter tablespace test1 read only;
SQL> alter system archive log current;
Note: make sure the standby has received the redo. If the datafiles differ, the tablespace plug-in will return an error such as:
ORA-39123: Data Pump transportable tablespace job aborted
ORA-19722: datafile /u01/oradata/convert/TEST1_5.xtf is an incorrect version
b. Step 4.2 - Create the final incremental backup, transfer, convert, and apply it to the destination datafiles
<same steps as in this note>
c. Step 4.3 - Create a database link in the destination database connecting to the PRIMARY
SQL> create public database link primarylink connect to system identified by manager using '<connect string>';
Test the link:
SQL> select name, database_role from v$database@primarylink;
NOTE: For a large number of files, using dbms_file_transfer (see Phase 2 in Note 1389592.1) has been found to be the fastest method for transferring datafiles to the destination. The method outlined in the following article also applies to 12c databases:
11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1).
We are all aware of the issue surrounding cross platform database migrations. The endian difference between RISC Unix platforms (big endian) versus x86 platforms (little endian) causes a conversion of the data before it is usable by Oracle. There are three methodologies possible when converting the endian-ness of a database.
Traditional export using exp/imp or Data Pump
Cross platform transportable tablespaces (XTTS)
Logical replication with Streams or Golden Gate
Each method has its roadblocks. Exporting the data is fairly simple, but requires a lot of downtime for large databases. XTTS too requires a lot of downtime for large databases, although usually less than exporting and importing. Logical replication offers less downtime, but Golden Gate is a separately licensed product that comes with a hefty price tag.
As of 11.2.0.4 there is an additional method, which is based on traditional XTTS. This is XTTS along with cross platform incremental backups.
TRADITIONAL XTTS
The steps involved with traditional XTTS are as follows:
Make the source datafiles read only (downtime begins)
Transfer datafiles to the destination
Convert datafiles to new endian format
Export metadata from the source
Import metadata on the destination
Make tablespaces on destination read/write
The problem is that the source datafiles must be made read only before the copy to the target system. This copy can take a very long time for a large, multi-terabyte database.
XTTS WITH CROSS PLATFORM INCREMENTAL BACKUP
The main difference with this procedure is that the initial copy of the datafiles occurs while the source database remains online. Then, incremental backups are taken of the source database, transferred to the destination, converted to the new endian-ness, and applied to the destination. Here are the steps.
Transfer source datafiles to the destination (source database remains online)
Convert datafiles to new endian format
Create an incremental backup of the source tablespaces
Transfer the incremental backup to the destination
Convert the incremental backup to new endian format
Apply the incremental backup to the destination database
Repeat the incremental backup steps as needed
Place the source database into read only mode
Repeat the incremental backup steps
Export metadata from the source
Import metadata on the destination
Make tablespaces on destination read/write
Cross Platform Incremental Caveat
The one caveat to this process is that the functionality of converting an incremental backup is new in 11.2.0.4. This means that an 11.2.0.4 Oracle home must exist on the destination system to perform the conversion of the incremental backup. That does not mean that the destination database must be 11.2.0.4. The 11.2.0.4 home can be used for the conversion even if the destination database is a lower version.
CONCLUSION
In this post I described, in more detail, one of the three methodologies possible when converting the endian-ness of a database – cross platform transportable tablespaces (XTTS). Hopefully you now have a better understanding of the pros and cons of this method and whether or not it’s a good fit for your Oracle environment.
OVERVIEW
In part one of this post, we described the high level concept of using Oracle’s new cross platform incremental backup along with transportable tablespaces. These tools allow a DBA to perform cross platform transportable tablespace operations with the source database online, and then later to apply one or more incremental backups of the source database to roll the destination database forward. This can substantially reduce the downtime of a cross platform transportable tablespace operation.
In part two of this post, we will outline the specific steps required to perform this migration using the new cross platform incremental backup functionality.
COMPLETE MIGRATION STEPS
The following are the high level steps necessary to complete a cross platform transportable tablespace migration with cross platform incremental backup using the Oracle scripts outlined in MOS note 1389592.1 (MOS note 2005729.1 for 12c):
Install 11.2.0.4 Oracle home, if not already installed
Initial configuration of the Oracle perl scripts
Turn on block change tracking, if not already configured
Transfer and convert datafiles from source to target
Perform incremental backup and apply to target
Repeat incremental backup and apply
Put tablespaces into read only mode on the source
Perform final incremental backup and apply to target
Perform TTS Data Pump import on the target
Perform metadata only export on the source
Perform metadata only import on the target
Audit objects between source and target to ensure everything came over
Set the tablespaces to read-write on the target
Configure Oracle Perl Scripts
The first step in using this methodology is to download the zip file containing the scripts from MOS note 1389592.1. The current version of the scripts is located in file rman_xttconvert_2.0.zip. Unzip the file on both the source and target systems, then configure the xtt.properties file on both nodes. This is the parameter file that controls the operations; the comments in the file describe how to modify the entries.
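For orientation, a minimal xtt.properties might look like the fragment below. The parameter names dfcopydir, backupformat, and stageondest are the ones discussed later in this post; the tablespace names, platform id, and paths are placeholders for illustration, so check the comments in the file shipped with the scripts for the authoritative list.

```properties
## Hypothetical xtt.properties fragment -- adjust names and paths for your environment.
## Comma-separated list of tablespaces to transport
tablespaces=APP_DATA,APP_IDX
## Source platform id, as reported by V$TRANSPORTABLE_PLATFORM
platformid=2
## Source-side directory for datafile copies produced by xttdriver.pl -p
dfcopydir=/stage_source
## Source-side location where incremental backup pieces are written
backupformat=/stage_source/%U
## Destination-side directory holding the copies and incremental backups
stageondest=/stage_dest
```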
Transfer and Convert the Datafiles
Run the Oracle-supplied perl script to copy the datafiles to the target system.
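As a command outline, this step maps onto the xttdriver.pl phase flags that appear in the transcripts later in this post (-S on the source, -G on the destination). The ORACLE_HOME paths and SIDs below are examples only; these commands assume a configured xtt.properties on both systems and are not runnable outside an Oracle environment.

```shell
# On the source: prepare the tablespaces and generate the copy metadata
export ORACLE_HOME=/oracle11/app/oracle/product/11.2.0/db   # example path
export ORACLE_SID=srcdb                                     # placeholder SID
$ORACLE_HOME/perl/bin/perl xttdriver.pl -S

# On the destination: pull the datafile copies across from the source
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db        # example path
export ORACLE_SID=dstdb                                     # placeholder SID
$ORACLE_HOME/perl/bin/perl xttdriver.pl -G
```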
5. Run the metadata only Data Pump import on the target.
nohup impdp \"/ as sysdba\" parfile=migrate_meta_imp.par > migrate_meta_imp.log 2>&1 &
## migrate_meta_imp.par
DIRECTORY = MIG_DIR
DUMPFILE = MIGRATE_META.dmp
LOGFILE = MIGRATE_META_IMP.log
FULL = Y
PARALLEL = 8
JOB_NAME = MIGRATE_META_IMP
6. Reconcile the source and target databases to ensure that all objects came over successfully.
set lines 132 pages 500 trimspool on echo off verify off feedback off
col object_name format a30
select owner, object_type, object_name, status
from dba_objects
where owner not in ('SYS', 'SYSTEM', 'TOAD', 'SCOTT', 'OUTLN', 'MSDB1', 'DBSNMP', 'PUBLIC', 'XDB', 'WMSYS', 'WKSYS', 'ORDSYS', 'OLAPSYS', 'ORDPLUGINS', 'ODM', 'ODM_MTR', 'MDSYS', 'CTXSYS')
minus
select owner, object_type, object_name, status
from dba_objects@ttslink
where owner not in ('SYS', 'SYSTEM', 'TOAD', 'SCOTT', 'OUTLN', 'MSDB1', 'DBSNMP', 'PUBLIC', 'XDB', 'WMSYS', 'WKSYS', 'ORDSYS', 'OLAPSYS', 'ORDPLUGINS', 'ODM', 'ODM_MTR', 'MDSYS', 'CTXSYS')
order by 1,2,3;
set echo on verify on feedback on
7. Set the tablespaces to read write on the target.
alter tablespace APP_DATA read write;
alter tablespace APP_IDX read write;
alter tablespace APP_DATA2 read write;
…
CONCLUSION
In this blog, we have demonstrated the steps for using cross platform incremental backup to reduce downtime for large dataset platform migrations without the need for additional licensed products.
[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Mon Aug 14 08:39:17 BEIST 2017 on /dev/pts/0 from 10.138.130.242
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,37,50)
150 Opening data connection for /bin/ls.
total 424
-rw-r--r-- 1 oracle11 oinstall 1390 May 24 16:57 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle11 oinstall 52 May 24 16:57 xttstartupnomount.sql
-rw-r--r-- 1 oracle11 oinstall 11710 May 24 16:57 xttprep.tmpl
-rw-r--r-- 1 oracle11 oinstall 139331 May 24 16:57 xttdriver.pl
-rw-r--r-- 1 oracle11 oinstall 71 May 24 16:57 xttdbopen.sql
-rw-r--r-- 1 oracle11 oinstall 7969 Jun 05 08:47 xtt.properties.jy
-rw-r----- 1 oracle11 oinstall 33949 Aug 18 09:26 rman_xttconvert_v3.zip
-rw-r--r-- 1 oracle11 oinstall 352 Aug 18 10:15 xtt.properties
226 Transfer complete.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> bin
200 Type set to I.
ftp> get xttcnvrtbkupdest.sql
local: xttcnvrtbkupdest.sql remote: xttcnvrtbkupdest.sql
227 Entering Passive Mode (10,138,129,2,37,63)
150 Opening data connection for xttcnvrtbkupdest.sql (1390 bytes).
226 Transfer complete.
1390 bytes received in 4.8e-05 seconds (2.8e+04 Kbytes/s)
ftp> get xttstartupnomount.sql
local: xttstartupnomount.sql remote: xttstartupnomount.sql
227 Entering Passive Mode (10,138,129,2,37,66)
150 Opening data connection for xttstartupnomount.sql (52 bytes).
226 Transfer complete.
52 bytes received in 3.7e-05 seconds (1.4e+03 Kbytes/s)
ftp> get xttprep.tmpl
local: xttprep.tmpl remote: xttprep.tmpl
227 Entering Passive Mode (10,138,129,2,37,69)
150 Opening data connection for xttprep.tmpl (11710 bytes).
226 Transfer complete.
11710 bytes received in 0.00065 seconds (1.7e+04 Kbytes/s)
ftp> get xttdriver.pl
local: xttdriver.pl remote: xttdriver.pl
227 Entering Passive Mode (10,138,129,2,37,72)
150 Opening data connection for xttdriver.pl (139331 bytes).
226 Transfer complete.
139331 bytes received in 0.0026 seconds (5.3e+04 Kbytes/s)
ftp> get xttdbopen.sql
local: xttdbopen.sql remote: xttdbopen.sql
227 Entering Passive Mode (10,138,129,2,37,77)
150 Opening data connection for xttdbopen.sql (71 bytes).
226 Transfer complete.
71 bytes received in 3.9e-05 seconds (1.8e+03 Kbytes/s)
ftp> get xtt.properties
local: xtt.properties remote: xtt.properties
227 Entering Passive Mode (10,138,129,2,37,84)
150 Opening data connection for xtt.properties (352 bytes).
226 Transfer complete.
352 bytes received in 4.2e-05 seconds (8.2e+03 Kbytes/s)
[oracle@jyrac1 xtts_script]$ ls -lrt
total 172
-rw-r--r-- 1 oracle oinstall 1390 Aug 18 10:38 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall 52 Aug 18 10:38 xttstartupnomount.sql
-rw-r--r-- 1 oracle oinstall 11710 Aug 18 10:38 xttprep.tmpl
-rw-r--r-- 1 oracle oinstall 139331 Aug 18 10:38 xttdriver.pl
-rw-r--r-- 1 oracle oinstall 71 Aug 18 10:38 xttdbopen.sql
-rw-r--r-- 1 oracle oinstall 352 Aug 18 10:38 xtt.properties
IBMP740-2:/oracle11/xtts_script$export ORACLE_HOME=/oracle11/app/oracle/product/11.2.0/db
IBMP740-2:/oracle11/xtts_script$export ORACLE_SID=jycs
IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -S
============================================================
trace file is /oracle11/xtts_script/setupgetfile_Aug18_Fri_10_21_17_169//Aug18_Fri_10_21_17_169_.log
=============================================================
--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Starting prepare phase
--------------------------------------------------------------------
Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 10:21:17 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:21:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:21:18 2017
--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------
[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 10:16:01 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/xtts_script
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,38,79)
150 Opening data connection for /bin/ls.
total 456
-rw-r--r-- 1 oracle11 oinstall 1390 May 24 16:57 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle11 oinstall 52 May 24 16:57 xttstartupnomount.sql
-rw-r--r-- 1 oracle11 oinstall 11710 May 24 16:57 xttprep.tmpl
-rw-r--r-- 1 oracle11 oinstall 139331 May 24 16:57 xttdriver.pl
-rw-r--r-- 1 oracle11 oinstall 71 May 24 16:57 xttdbopen.sql
-rw-r--r-- 1 oracle11 oinstall 7969 Jun 05 08:47 xtt.properties.jy
-rw-r----- 1 oracle11 oinstall 33949 Aug 18 09:26 rman_xttconvert_v3.zip
-rw-r--r-- 1 oracle11 oinstall 352 Aug 18 10:15 xtt.properties
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:21 xttplan.txt
-rw-r--r-- 1 oracle11 oinstall 106 Aug 18 10:21 xttnewdatafiles.txt_temp
-rw-r--r-- 1 oracle11 oinstall 50 Aug 18 10:21 xttnewdatafiles.txt
drwxr-xr-x 2 oracle11 oinstall 256 Aug 18 10:21 setupgetfile_Aug18_Fri_10_21_17_169
-rw-r--r-- 1 oracle11 oinstall 68 Aug 18 10:21 getfile.sql
226 Transfer complete.
ftp> lcd /u01/xtts_script
Local directory now /u01/xtts_script
ftp> bin
200 Type set to I.
ftp> get xttnewdatafiles.txt
local: xttnewdatafiles.txt remote: xttnewdatafiles.txt
227 Entering Passive Mode (10,138,129,2,38,112)
150 Opening data connection for xttnewdatafiles.txt (50 bytes).
226 Transfer complete.
50 bytes received in 6.2e-05 seconds (7.9e+02 Kbytes/s)
ftp> get getfile.sql
local: getfile.sql remote: getfile.sql
227 Entering Passive Mode (10,138,129,2,38,115)
150 Opening data connection for getfile.sql (68 bytes).
226 Transfer complete.
68 bytes received in 4.9e-05 seconds (1.4e+03 Kbytes/s)
# MUST set environment to destination database
[oracle@jyrac1 xtts_script]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db
[oracle@jyrac1 xtts_script]$ export ORACLE_SID=jyrac1
[oracle@jyrac1 xtts_script]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
============================================================
trace file is /u01/xtts_script/getfile_Aug18_Fri_11_03_48_564//Aug18_Fri_11_03_48_564_.log
=============================================================
--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Getting datafiles from source
--------------------------------------------------------------------
--------------------------------------------------------------------
Executing getfile for /u01/xtts_script/getfile_Aug18_Fri_11_03_48_564//getfile_sourcedir_cdzj01_0.sql
--------------------------------------------------------------------
--------------------------------------------------------------------
Executing getfile for /u01/xtts_script/getfile_Aug18_Fri_11_03_48_564//getfile_sourcedir_ldjc01_1.sql
--------------------------------------------------------------------
--------------------------------------------------------------------
Completed getting datafiles from source
--------------------------------------------------------------------
ASMCMD [+datadg/jyrac/datafile] > ls -lt
Type Redund Striped Time Sys Name
N ldjc01 => +DATADG/JYRAC/DATAFILE/FILE_TRANSFER.271.952340629
N cdzj01 => +DATADG/JYRAC/DATAFILE/FILE_TRANSFER.272.952340629
DATAFILE MIRROR COARSE AUG 18 11:00:00 Y FILE_TRANSFER.272.952340629
DATAFILE MIRROR COARSE AUG 18 11:00:00 Y FILE_TRANSFER.271.952340629
IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -i
============================================================
trace file is /oracle11/xtts_script/incremental_Aug18_Fri_10_56_44_606//Aug18_Fri_10_56_44_606_.log
=============================================================
--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 10:56:44 2017
xttpreparesrc.sql for ended at Fri Aug 18 10:56:44 2017
============================================================
No new datafiles added
=============================================================
Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: ''''''''''''
--------------------------------------------------------------------
Starting incremental backup
--------------------------------------------------------------------
--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------
[oracle@jyrac1 xtts]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 10:24:32 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/backup
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,43,121)
150 Opening data connection for /bin/ls.
total 624
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1
226 Transfer complete.
ftp> lcd /u01/xtts
Local directory now /u01/xtts
ftp> bin
200 Type set to I.
ftp> get 06sc73nf_1_1
local: 06sc73nf_1_1 remote: 06sc73nf_1_1
227 Entering Passive Mode (10,138,129,2,43,130)
150 Opening data connection for 06sc73nf_1_1 (65536 bytes).
226 Transfer complete.
65536 bytes received in 0.0018 seconds (3.5e+04 Kbytes/s)
ftp> get 07sc73ng_1_1
local: 07sc73ng_1_1 remote: 07sc73ng_1_1
227 Entering Passive Mode (10,138,129,2,43,134)
150 Opening data connection for 07sc73ng_1_1 (253952 bytes).
226 Transfer complete.
253952 bytes received in 0.0038 seconds (6.5e+04 Kbytes/s)
[oracle@jyrac1 xtts]$ ls -lrt
total 320
-rw-r--r-- 1 oracle oinstall 65536 Aug 18 11:22 06sc73nf_1_1
-rw-r--r-- 1 oracle oinstall 253952 Aug 18 11:22 07sc73ng_1_1
Error:
------
The incremental backup was not taken as a datafile has been added to the tablespace:
Please Do the following:
--------------------------
1. Copy fixnewdf.txt from source to destination temp dir
2. Copy backups:
from to the in destination
3. On Destination, run $ORACLE_HOME/perl/bin/perl xttdriver.pl --fixnewdf
4. Re-execute the incremental backup in source:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --bkpincr
NOTE: Before running incremental backup, delete FAILED in source temp dir or
run xttdriver.pl with -L option:
$ORACLE_HOME/perl/bin/perl xttdriver.pl -L --bkpincr
These instructions must be followed exactly as listed. The next incremental backup will include the new datafile.
IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -i
============================================================
trace file is /oracle11/xtts_script/incremental_Aug18_Fri_11_23_16_532//Aug18_Fri_11_23_16_532_.log
=============================================================
--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 11:23:16 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:16 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 11:23:16 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:16 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:23:16 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:17 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:23:17 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:17 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:23:17 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:23:17 2017
============================================================
No new datafiles added
=============================================================
Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: ''''''''''''
--------------------------------------------------------------------
Starting incremental backup
--------------------------------------------------------------------
--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------
[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 11:02:13 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/backup
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,46,249)
150 Opening data connection for /bin/ls.
total 1120
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:23 08sc7597_1_1
-rw-r----- 1 oracle11 oinstall 204800 Aug 18 11:23 09sc7598_1_1
226 Transfer complete.
ftp> lcd /u01/xtts
Local directory now /u01/xtts
ftp> bin
200 Type set to I.
ftp> get 08sc7597_1_1
local: 08sc7597_1_1 remote: 08sc7597_1_1
227 Entering Passive Mode (10,138,129,2,47,4)
150 Opening data connection for 08sc7597_1_1 (49152 bytes).
226 Transfer complete.
49152 bytes received in 0.0013 seconds (3.7e+04 Kbytes/s)
ftp> get 09sc7598_1_1
local: 09sc7598_1_1 remote: 09sc7598_1_1
227 Entering Passive Mode (10,138,129,2,47,9)
150 Opening data connection for 09sc7598_1_1 (204800 bytes).
226 Transfer complete.
204800 bytes received in 0.0029 seconds (7e+04 Kbytes/s)
IBMP740-2:/oracle11/xtts_script$$ORACLE_HOME/perl/bin/perl xttdriver.pl -i
============================================================
trace file is /oracle11/xtts_script/incremental_Aug18_Fri_11_33_18_477//Aug18_Fri_11_33_18_477_.log
=============================================================
--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare source for Tablespaces:
'CDZJ' /u01/xtts
xttpreparesrc.sql for 'CDZJ' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'LDJC' /u01/xtts
xttpreparesrc.sql for 'LDJC' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
Prepare source for Tablespaces:
'''' /u01/xtts
xttpreparesrc.sql for '''' started at Fri Aug 18 11:33:18 2017
xttpreparesrc.sql for ended at Fri Aug 18 11:33:18 2017
============================================================
No new datafiles added
=============================================================
Prepare newscn for Tablespaces: 'CDZJ'
Prepare newscn for Tablespaces: 'LDJC'
Prepare newscn for Tablespaces: ''''''''''''
--------------------------------------------------------------------
Starting incremental backup
--------------------------------------------------------------------
--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------
[oracle@jyrac1 xtts_script]$ ftp 10.138.129.2
Connected to 10.138.129.2.
220 IBMP740-2 FTP server (Version 4.2 Mon Nov 28 14:12:02 CST 2011) ready.
502 authentication type cannot be set to GSSAPI
502 authentication type cannot be set to KERBEROS_V4
KERBEROS_V4 rejected as an authentication type
Name (10.138.129.2:oracle): oracle
331 Password required for oracle.
Password:
230-Last unsuccessful login: Wed Dec 3 10:20:09 BEIST 2014 on /dev/pts/0 from 10.138.130.31
230-Last login: Fri Aug 18 11:26:03 BEIST 2017 on ftp from ::ffff:10.138.130.151
230 User oracle logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd /oracle11/backup
250 CWD command successful.
ftp> ls -lrt
227 Entering Passive Mode (10,138,129,2,48,62)
150 Opening data connection for /bin/ls.
total 1632
-rw-r----- 1 oracle11 oinstall 65536 Aug 18 10:56 06sc73nf_1_1
-rw-r----- 1 oracle11 oinstall 253952 Aug 18 10:56 07sc73ng_1_1
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:23 08sc7597_1_1
-rw-r----- 1 oracle11 oinstall 204800 Aug 18 11:23 09sc7598_1_1
-rw-r----- 1 oracle11 oinstall 49152 Aug 18 11:33 0asc75s0_1_1
-rw-r----- 1 oracle11 oinstall 212992 Aug 18 11:33 0bsc75s2_1_1
226 Transfer complete.
ftp> lcd /u01/xtts
Local directory now /u01/xtts
ftp> get 0asc75s0_1_1
local: 0asc75s0_1_1 remote: 0asc75s0_1_1
227 Entering Passive Mode (10,138,129,2,48,73)
150 Opening data connection for 0asc75s0_1_1 (49152 bytes).
226 Transfer complete.
49152 bytes received in 0.0015 seconds (3.3e+04 Kbytes/s)
ftp> get 0bsc75s2_1_1
local: 0bsc75s2_1_1 remote: 0bsc75s2_1_1
227 Entering Passive Mode (10,138,129,2,48,76)
150 Opening data connection for 0bsc75s2_1_1 (212992 bytes).
226 Transfer complete.
212992 bytes received in 0.0032 seconds (6.6e+04 Kbytes/s)
SQL> create directory dump_dir as '/u01/xtts_script';
Directory created.
SQL> grant read,write on directory dump_dir to public;
Grant succeeded.
Create the LDJC and CDZJ schemas in the destination database:
SQL> create user ldjc identified by "ldjc";
User created.
SQL> grant dba,connect,resource to ldjc;
Grant succeeded.
SQL> create user cdzj identified by "cdzj";
User created.
SQL> grant dba,connect,resource to cdzj;
Grant succeeded.
[oracle@jyrac1 xtts_script]$ impdp system/abcd directory=dump_dir logfile=tts_imp.log network_link=ttslink transport_full_check=no transport_tablespaces=CDZJ,LDJC transport_datafiles='+DATADG/jyrac/datafile/cdzj01','+DATADG/jyrac/datafile/ldjc01'
Import: Release 11.2.0.4.0 - Production on Fri Aug 18 12:05:05 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03": system/******** directory=dump_dir logfile=tts_imp.log network_link=ttslink transport_full_check=no transport_tablespaces=CDZJ,LDJC transport_datafiles=+DATADG/jyrac/datafile/cdzj01,+DATADG/jyrac/datafile/ldjc01
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_03" successfully completed at Fri Aug 18 12:07:05 2017 elapsed 0 00:01:52
[oracle@jyrac1 xtts_script]$ impdp system/abcd directory=dump_dir logfile=ysj.log schemas=ldjc,cdzj content=metadata_only exclude=table,index network_link=ttslink
Import: Release 11.2.0.4.0 - Production on Fri Aug 18 12:09:15 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** directory=dump_dir logfile=ysj.log schemas=ldjc,cdzj content=metadata_only exclude=table,index network_link=ttslink
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"LDJC" already exists
ORA-31684: Object type USER:"CDZJ" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type VIEW:"LDJC"."TEMP_AAB002" created with compilation warnings
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
ORA-39082: Object type PACKAGE_BODY:"LDJC"."QUEST_SOO_PKG" created with compilation warnings
ORA-39082: Object type PACKAGE_BODY:"LDJC"."QUEST_SOO_SQLTRACE" created with compilation warnings
Processing object type SCHEMA_EXPORT/JOB
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCOBJ
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 5 error(s) at Fri Aug 18 12:09:46 2017 elapsed 0 00:00:30
SQL> select * from ldjc.jy_test;
USER_ID
---------------------
7
8
8
9
1
2
3
4
5
6
10 rows selected
SQL> select * from cdzj.jy_test;
USER_ID
---------------------
7
8
9
1
2
3
4
5
6
9 rows selected
After the metadata import, the ldjc and cdzj tablespaces in the source database can be set back to read write:
SQL> alter tablespace ldjc read write;
Tablespace altered.
SQL> alter tablespace cdzj read write;
Tablespace altered.
dfcopydir: the directory on the source system that stores the datafile copies generated by the xttdriver.pl -p operation. It must have enough space to hold copies of all datafiles of the tablespaces being transported. This directory can be an NFS-mounted filesystem exported from the destination system, in which case the stageondest parameter on the destination should reference the same NFS directory; see Note 359515.1 for mount option guidelines. This parameter is used only when RMAN backups are used to generate the datafile copies, e.g. dfcopydir=/stage_source.
stageondest: the directory on the destination system that holds the datafile copies manually transferred from the source. It must have enough space for the datafile copies, and it is also where the incremental backup files transferred from the source are stored. The xttdriver.pl -c datafile conversion and xttdriver.pl -r roll-forward operations on the destination read the datafile copies and incremental backups from this directory. It can be a DBFS-mounted filesystem, or an NFS-mounted filesystem exported from the source system, in which case the backupformat and dfcopydir parameters on the source should reference the same NFS directory; see Note 359515.1 for mount option guidelines. For example, stageondest=/stage_dest.
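When the staging area is a single NFS share, as described above, the source-side and destination-side parameters simply point at the same mount. A hypothetical pairing (mount point /stage is a placeholder):

```properties
## Source-side xtt.properties (NFS share mounted at /stage on both systems)
dfcopydir=/stage
backupformat=/stage/%U

## Destination-side xtt.properties (same share, same path)
stageondest=/stage
```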
1.##ct66rac01
## On the destination instance, create a database link to the source and a directory for the datafiles.
# This step is only needed if the metadata will later be imported over an impdp database link; for a local import the dblink is unnecessary.
[oracle@ct66rac01 ~]$ cd /u01/app/oracle/product/11.2.0/db_1/network/admin/
[oracle@ct66rac01 admin]$ vi tnsnames.ora
CTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.108.56.120)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ctdb)
    )
  )
[oracle@ct66rac01 dbs]$ ORACLE_SID=ctdb
[oracle@ct66rac01 ~]$ sqlplus / as sysdba
SQL> create directory dump_oradata as '+DATA';
SQL> grant read,write on directory dump_oradata to public;
# SQL> create public database link lnk_ctdb connect to system identified by system using 'ctdb';
SQL> create public database link lnk_afa_hp connect to dbmgr identified by db1234DBA using 'afa_hp';
SQL> select * from dual@lnk_afa_hp;
/*
DUMMY
X
*/
SQL> exit
[oracle@ct66rac01 ~]$ su -
[root@ct66rac01 oracle]# service nfs status
[root@ct66rac01 ~]# cat /etc/exports
/home/oracle/xtts *(rw,sync,no_root_squash,insecure,anonuid=500,anongid=500)
[root@ct66rac01 oracle]# service nfs start
3.##ct6604 ##On the source, create the test users, tablespaces, tables and grants.
#The grants and tables here are used to verify the migration afterwards.
[oracle@ct6604 ~]$ ORACLE_SID=ctdb
[oracle@ct6604 ~]$ sqlplus / as sysdba
SQL> create tablespace tbs01 datafile '/u02/oradata/ctdb/tbs01.dbf' size 10m autoextend on next 2m maxsize 4g;
SQL> create tablespace tbs02 datafile '/u02/oradata/ctdb/tbs02.dbf' size 10m autoextend on next 2m maxsize 4g;
SQL> create user test01 identified by test01 default tablespace tbs01;
SQL> create user test02 identified by test02 default tablespace tbs02;
SQL> grant connect,resource to test01;
SQL> grant connect,resource to test02;
SQL> grant execute on dbms_crypto to test02;
SQL> create table test01.tb01 as select * from dba_objects;
SQL> create table test02.tb01 as select * from dba_objects;
SQL> grant select on test01.tb01 to test02;
SQL> exit
4.##ct6604 ##On the source, mount the NFS export from the destination at /oracle2/xtts.
[oracle@ct6604 ~]$ mkdir /oracle2/xtts
[oracle@ct6604 ~]$ su -
[root@ct6604 ~]# showmount -e 25.10.0.31
Export list for 192.108.56.101:
/home/oracle/xtts *
[root@ct6604 ~]# mount -t nfs 25.10.0.31:/home/oracle/xtts /oracle2/xtts
#On an HP-UX source the equivalent mount syntax is:
mount -F nfs 25.10.0.31:/home/oracle/xtts /home/oracle/xtts
Prepare source for Tablespaces: 'TBS01' /home/oracle/xtts/backup
xttpreparesrc.sql for 'TBS01' started at Thu Jul 26 17:42:49 2018
xttpreparesrc.sql for ended at Thu Jul 26 17:42:50 2018
Prepare source for Tablespaces: 'TBS02' /home/oracle/xtts/backup
xttpreparesrc.sql for 'TBS02' started at Thu Jul 26 17:43:02 2018
xttpreparesrc.sql for ended at Thu Jul 26 17:43:02 2018
--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------
--------------------------------------------------------------------
Find list of datafiles in system
--------------------------------------------------------------------
--------------------------------------------------------------------
Done finding list of datafiles in system
--------------------------------------------------------------------
--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare newscn for Tablespaces: 'TBS01'
Prepare newscn for Tablespaces: 'TBS02'
Prepare newscn for Tablespaces: ''
rman target / cmdfile /home/oracle/xtts/script/rmanincr.cmd
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Jul 26 18:18:03 2018
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: AFA (DBID=1231362390)
RMAN> set nocfau;
2> host 'echo ts::TBS01';
3> backup incremental from scn 3168157425
4> tag tts_incr_update tablespace 'TBS01' format
5> '/home/oracle/xtts/backup/%U';
6> set nocfau;
7> host 'echo ts::TBS02';
8> backup incremental from scn 3168157441
9> tag tts_incr_update tablespace 'TBS02' format
10> '/home/oracle/xtts/backup/%U';
11>
executing command: SET NOCFAU
using target database control file instead of recovery catalog
ts::TBS01 host command complete
Starting backup at 26-JUL-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=168 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00025 name=/datalv03/afa/tbs01.dbf
channel ORA_DISK_1: starting piece 1 at 26-JUL-18
channel ORA_DISK_1: finished piece 1 at 26-JUL-18
piece handle=/home/oracle/xtts/backup/7et904et_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-JUL-18
executing command: SET NOCFAU
ts::TBS02 host command complete
Starting backup at 26-JUL-18
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00026 name=/datalv03/afa/tbs02.dbf
channel ORA_DISK_1: starting piece 1 at 26-JUL-18
channel ORA_DISK_1: finished piece 1 at 26-JUL-18
piece handle=/home/oracle/xtts/backup/7ft904f5_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 26-JUL-18
Recovery Manager complete.
--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------
Recovery Manager: Release 10.2.0.4.0 - Production on Fri Jul 27 10:52:12 2018
Copyright (c) 1982, 2007, Oracle. All rights reserved.
RMAN-06005: connected to target database: AFA (DBID=1231362390)
RMAN> set nocfau;
2> host 'echo ts::TBS01';
3> backup incremental from scn 3168157425
4> tag tts_incr_update tablespace 'TBS01' format
5> '/home/oracle/xtts/backup/%U';
6> set nocfau;
7> host 'echo ts::TBS02';
8> backup incremental from scn 3168157441
9> tag tts_incr_update tablespace 'TBS02' format
10> '/home/oracle/xtts/backup/%U';
11>
RMAN-03023: executing command: SET NOCFAU
RMAN-06009: using target database control file instead of recovery catalog
ts::TBS01 RMAN-06134: host command complete
RMAN-03090: Starting backup at 27-JUL-18
RMAN-08030: allocated channel: ORA_DISK_1
RMAN-08500: channel ORA_DISK_1: sid=56 devtype=DISK
RMAN-08008: channel ORA_DISK_1: starting full datafile backupset
RMAN-08010: channel ORA_DISK_1: specifying datafile(s) in backupset
RMAN-08522: input datafile fno=00025 name=/datalv03/afa/tbs01.dbf
RMAN-08038: channel ORA_DISK_1: starting piece 1 at 27-JUL-18
RMAN-08044: channel ORA_DISK_1: finished piece 1 at 27-JUL-18
RMAN-08530: piece handle=/home/oracle/xtts/backup/7gt91un0_1_1 tag=TTS_INCR_UPDATE comment=NONE
RMAN-08540: channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
RMAN-03091: Finished backup at 27-JUL-18
RMAN-03023: executing command: SET NOCFAU
ts::TBS02 RMAN-06134: host command complete
RMAN-03090: Starting backup at 27-JUL-18
RMAN-12016: using channel ORA_DISK_1
RMAN-08008: channel ORA_DISK_1: starting full datafile backupset
RMAN-08010: channel ORA_DISK_1: specifying datafile(s) in backupset
RMAN-08522: input datafile fno=00026 name=/datalv03/afa/tbs02.dbf
RMAN-08038: channel ORA_DISK_1: starting piece 1 at 27-JUL-18
RMAN-08044: channel ORA_DISK_1: finished piece 1 at 27-JUL-18
RMAN-08530: piece handle=/home/oracle/xtts/backup/7ht91ung_1_1 tag=TTS_INCR_UPDATE comment=NONE
RMAN-08540: channel ORA_DISK_1: backup set complete, elapsed time: 00:01:05
RMAN-03091: Finished backup at 27-JUL-18
Recovery Manager complete.
TSNAME:TBS01
TSNAME:TBS02
--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------
Can't locate Exporter/Heavy.pm in @INC (@INC contains: /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/5.10.0 /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /db/ebank/app/11.2.0/grid/perl/lib/site_perl/5.10.0 /db/ebank/app/11.2.0/grid/lib /db/ebank/app/11.2.0/grid/lib/asmcmd /db/ebank/app/11.2.0/grid/rdbms/lib/asmcmd /db/ebank/app/11.2.0/grid/perl/lib/site_perl .) at /db/ebank/app/11.2.0/grid/perl/lib/5.10.0/Exporter.pm line 18.
BEGIN failed--compilation aborted at /db/ebank/app/11.2.0/grid/bin/asmcmdcore line 146.
ASMCMD:
--------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------
15.##ct66rac01 ##On the destination, create the users and import the transportable tablespaces.
[oracle@ct66rac01 ~]$ ORACLE_SID=rac11g1
[oracle@ct66rac01 xtts]$ sqlplus / as sysdba
SQL> create user test01 identified by test01;
SQL> create user test02 identified by test02;
SQL> grant connect,resource to test01;
SQL> grant connect,resource to test02;
SQL> exit
Import: Release 11.2.0.4.0 - Production on Fri Jan 15 17:18:14 2016
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Username: system Password:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** directory=dump_oradata nologfile=y network_link=lnk_ctdb transport_full_check=no transport_tablespaces=TBS01,TBS02 transport_datafiles=+DATA/tbs01_5.xtf,+DATA/tbs02_6.xtf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Jan 15 17:19:07 2016 elapsed 0 00:00:48
16.##ct66rac01 ##On the destination, verify that the imported data and grants match the source.
#Here we find that the execute on dbms_crypto grant given to test02 on the source was not imported; this is an inherent limitation of impdp, so sort out such grants before the XTTS run to reduce downtime.
[oracle@ct66rac01 xtts]$ sqlplus / as sysdba
SQL> alter tablespace tbs01 read write;
SQL> alter tablespace tbs02 read write;
SQL> alter user test01 default tablespace tbs01;
SQL> alter user test02 default tablespace tbs02;
SQL> select count(1) from test01.tb01; /* COUNT(1) 345732 */
SQL> select count(1) from test02.tb01;
SQL> select * from dba_tab_privs where grantee='TEST02';
/* GRANTEE OWNER  TABLE_NAME GRANTOR PRIVILEGE GRANTABLE HIERARCHY
   TEST02  TEST01 TB01       TEST01  SELECT    NO        NO */
#select * from dba_tab_privs where owner ='SYS' and grantee='TEST02';
SQL> grant execute on dbms_crypto to test02;
SQL> exit
Privilege comparison SQL
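The note leaves the comparison query itself blank. A minimal sketch, assuming the lnk_ctdb database link created in step 1 and the TEST01/TEST02 users from this test, that lists object grants present on the source but missing on the destination:

```sql
-- Sketch: object grants that exist on the source (queried over the dblink)
-- but are absent locally after the transportable import.
SELECT grantee, owner, table_name, privilege
  FROM dba_tab_privs@lnk_ctdb
 WHERE grantee IN ('TEST01', 'TEST02')
MINUS
SELECT grantee, owner, table_name, privilege
  FROM dba_tab_privs
 WHERE grantee IN ('TEST01', 'TEST02');
```

Run it on the destination; any rows returned are grants (such as execute on dbms_crypto) that must be re-issued manually.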
A few minor issues hit during testing:
1. "Cant find xttplan.txt, TMPDIR undefined at xttdriver.pl line 1185": make sure the environment variable TMPDIR=/home/oracle/xtts/script is set.
2. "Unable to fetch platform name": ORACLE_SID was not set before running xttdriver.pl.
3. "Some failure occurred. Check /home/oracle/xtts/script/FAILED for more details. If you have fixed the issue, please delete /home/oracle/xtts/script/FAILED and run it again OR run xttdriver.pl with -L option": after a failed xttdriver.pl run, delete the FAILED file before retrying.
4. "Can't locate strict.pm in @INC": use $ORACLE_HOME/perl/bin/perl rather than the OS perl.
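The four fixes above boil down to a few lines of environment setup before each xttdriver.pl run. A sketch, using the paths and SID from this test (not defaults):

```shell
# Environment xttdriver.pl expects (values are this test's, not defaults)
export TMPDIR=/home/oracle/xtts/script   # where xttdriver.pl keeps xttplan.txt, res.txt etc.
export ORACLE_SID=ctdb                   # needed so xttdriver.pl can fetch the platform name
rm -f "$TMPDIR/FAILED"                   # clear the failure marker from a previous run
# Invoke with the Perl bundled in the database home, never the OS perl:
# $ORACLE_HOME/perl/bin/perl xttdriver.pl -p
echo "$TMPDIR"
```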
Notes: the test is complete, and it is fairly simple. With the preparation done, the whole migration amounts to a few runs of $ORACLE_HOME/perl/bin/perl xttdriver.pl on the source and destination followed by one impdp. Using NFS in this test avoided copying files back and forth, which made the whole procedure much cleaner. GoldenGate is also a good option for cutting migration downtime. For whole-database migration, when the platforms differ (or match) but share the same endianness, consider Data Guard first: Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration (Doc ID 413484.1).