RAC One Node relocation and conversion

This article tests RAC One Node relocation and conversion (converting between RAC One Node and RAC in both directions).

Reference:

https://docs.oracle.com/cd/E11882_01/rac.112/e41960/onenode.htm#RACAD7898

RAC One Node relocation

Before the relocation, the database and its service are running on node 1 (onenode1).

[grid@onenode1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.OCR.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.asm
               ONLINE  ONLINE       onenode1                 Started             
               ONLINE  ONLINE       onenode2                 Started             
ora.gsd
               OFFLINE OFFLINE      onenode1                                     
               OFFLINE OFFLINE      onenode2                                     
ora.net1.network
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.ons
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.registry.acfs
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       onenode1                                     
ora.cvu
      1        ONLINE  ONLINE       onenode1                                     
ora.oc4j
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode.db
      1        ONLINE  ONLINE       onenode1                 Open                
ora.onenode.one.svc
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode1.vip
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode2.vip
      1        ONLINE  ONLINE       onenode2                                     
ora.scan1.vip
      1        ONLINE  ONLINE       onenode1                                     
[grid@onenode1 ~]$ 
[grid@onenode1 ~]$ srvctl config service -d onenode
Service name: one
Service is enabled
Server pool: onenode
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition: 
Preferred instances: onenode_1
Available instances: 
[grid@onenode1 ~]$ 
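
In addition to crsctl, srvctl can confirm directly which node hosts the RAC One Node instance and where the service runs; a quick check (output omitted here):

srvctl status database -d onenode
srvctl status service -d onenode -s one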

Relocate the database to node 2. The relocation can be initiated from either node, but here it had to be executed with root privileges (the grid user lacked the required privilege, as the error below shows).

[grid@onenode2 ~]$ srvctl relocate database -d onenode -n onenode2
PRCD-1222 : Online relocation of database "onenode" failed but database was restored to its original state
PRCR-1037 : Failed to update cardinality of the resource ora.onenode.db to 2
PRCR-1071 : Failed to register or update resource ora.onenode.db
CRS-0245:  User doesn't have enough privilege to perform the operation
[grid@onenode2 ~]$ su root
Password: 
[root@onenode2 grid]# srvctl relocate database -d onenode -n onenode2
[root@onenode2 grid]# 
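
The relocation also accepts an online relocation timeout in minutes and a verbose flag; a sketch using the 11.2 srvctl relocate database options -w and -v:

srvctl relocate database -d onenode -n onenode2 -w 30 -v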

Check the alert logs on node 1 and node 2. On node 2 (alert_onenode_2.log) the new instance onenode_2 is started and opened; on node 1 the original instance onenode_1 is then shut down (transactional local).

 [oracle@onenode2 trace]$ more alert_onenode_2.log 
Mon Mar 19 16:22:09 2018
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = UNLIMITED
 
Total Shared Global Region in Large Pages = 0 KB (0%)
 
Large Pages used by this instance: 0 (0 KB)
Large Pages unused system wide = 0 (0 KB)
Large Pages configured system wide = 0 (0 KB)
Large Page size = 2048 KB
 
RECOMMENDATION:
  Total System Global Area size is 906 MB. For optimal performance,
  prior to the next instance restart:
  1. Increase the number of unused large pages by 
 at least 453 (page size 2048 KB, total size 906 MB) system wide to
  get 100% of the System Global Area allocated with large pages
********************************************************************
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 1
Number of processor cores in the system is 1
Number of processor sockets in the system is 1
Private Interface 'eth2:1' configured from GPnP for use as a private interconnect.
  [name='eth2:1', type=1, ip=169.254.230.126, mac=08-00-27-9a-1e-0c, net=169.254.0.0/16, mask=255.255.0.0, use=haip:cluster_interconnect/62]
Public Interface 'eth1' configured from GPnP for use as a public interface.
  [name='eth1', type=1, ip=192.168.2.62, mac=08-00-27-79-b4-56, net=192.168.2.0/24, mask=255.255.255.0, use=public/1]
Public Interface 'eth1:1' configured from GPnP for use as a public interface.
  [name='eth1:1', type=1, ip=192.168.2.72, mac=08-00-27-79-b4-56, net=192.168.2.0/24, mask=255.255.255.0, use=public/1]
Shared memory segment for instance monitoring created
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
    NUMA status: non-NUMA system
    cellaffinity.ora status: N/A
CELL communication will use 1 IP group(s):
    Grp 0: 
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
WARNING: db_recovery_file_dest is same as db_create_file_dest
Autotune of undo retention is turned on. 
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options.
ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1
System name:	Linux
Node name:	onenode2
Release:	2.6.32-696.el6.x86_64
Version:	#1 SMP Tue Feb 21 00:53:17 EST 2017
Machine:	x86_64
Using parameter settings in server-side pfile /u01/app/oracle/product/11.2.0/db_1/dbs/initonenode_2.ora
System parameters with non-default values:
  processes                = 150
  spfile                   = "+DATA/onenode/spfileonenode.ora"
  sga_target               = 904M
  control_files            = "+DATA/onenode/controlfile/current.261.971110795"
  control_files            = "+DATA/onenode/controlfile/current.260.971110795"
  db_block_size            = 8192
  compatible               = "11.2.0.4.0"
  log_archive_format       = "%t_%s_%r.dbf"
  cluster_database         = TRUE
  db_create_file_dest      = "+DATA"
  db_recovery_file_dest    = "+DATA"
  db_recovery_file_dest_size= 20G
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = ""
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=onenodeXDB)"
  remote_listener          = "onenode-scan:1521"
  audit_file_dest          = "/u01/app/oracle/admin/onenode/adump"
  audit_trail              = "DB"
  db_name                  = "onenode"
  open_cursors             = 300
  pga_aggregate_target     = 301M
  diagnostic_dest          = "/u01/app/oracle"
Cluster communication is configured to use the following interface(s) for this instance
  169.254.230.126
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
Mon Mar 19 16:22:20 2018
PMON started with pid=2, OS id=5987 
Mon Mar 19 16:22:20 2018
PSP0 started with pid=3, OS id=5989 
Mon Mar 19 16:22:21 2018
VKTM started with pid=4, OS id=5991 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Mon Mar 19 16:22:21 2018
GEN0 started with pid=5, OS id=5995 
Mon Mar 19 16:22:21 2018
DIAG started with pid=6, OS id=5997 
Mon Mar 19 16:22:21 2018
DBRM started with pid=7, OS id=5999 
Mon Mar 19 16:22:21 2018
PING started with pid=8, OS id=6001 
Mon Mar 19 16:22:21 2018
ACMS started with pid=9, OS id=6003 
Mon Mar 19 16:22:21 2018
DIA0 started with pid=10, OS id=6005 
Mon Mar 19 16:22:21 2018
LMON started with pid=11, OS id=6007 
Mon Mar 19 16:22:21 2018
LMD0 started with pid=12, OS id=6009 
* Load Monitor used for high load check 
* New Low - High Load Threshold Range = [960 - 1280] 
Mon Mar 19 16:22:21 2018
LMS0 started with pid=13, OS id=6011 at elevated priority
Mon Mar 19 16:22:21 2018
RMS0 started with pid=14, OS id=6015 
Mon Mar 19 16:22:21 2018
LMHB started with pid=15, OS id=6017 
Mon Mar 19 16:22:21 2018
MMAN started with pid=16, OS id=6019 
Mon Mar 19 16:22:21 2018
DBW0 started with pid=17, OS id=6021 
Mon Mar 19 16:22:21 2018
LGWR started with pid=18, OS id=6023 
Mon Mar 19 16:22:21 2018
CKPT started with pid=19, OS id=6025 
Mon Mar 19 16:22:22 2018
SMON started with pid=20, OS id=6027 
Mon Mar 19 16:22:22 2018
RECO started with pid=21, OS id=6029 
Mon Mar 19 16:22:22 2018
RBAL started with pid=22, OS id=6031 
Mon Mar 19 16:22:22 2018
ASMB started with pid=23, OS id=6033 
Mon Mar 19 16:22:22 2018
MMON started with pid=24, OS id=6035 
NOTE: initiating MARK startup 
Mon Mar 19 16:22:22 2018
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
Starting background process MARK
Mon Mar 19 16:22:22 2018
MMNL started with pid=25, OS id=6039 
Mon Mar 19 16:22:22 2018
MARK started with pid=26, OS id=6041 
NOTE: MARK has subscribed 
starting up 1 shared server(s) ...
lmon registered with NM - instance number 2 (internal mem no 1)
Reconfiguration started (old inc 0, new inc 4)
List of instances:
 1 2 (myinst: 2) 
 Global Resource Directory frozen
* allocate domain 0, invalid = TRUE 
 Communication channels reestablished
 * domain 0 valid = 1 according to instance 1 
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Reconfiguration complete
Mon Mar 19 16:22:24 2018
LCK0 started with pid=30, OS id=6054 
Starting background process RSMN
Mon Mar 19 16:22:24 2018
RSMN started with pid=31, OS id=6056 
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Mon Mar 19 16:22:25 2018
ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.72)(PORT=1521))' SCOPE=MEMORY SID='onenode_2';
ALTER DATABASE MOUNT /* db agent *//* {2:37189:197} */
NOTE: Loaded library: System 
SUCCESS: diskgroup DATA was mounted
NOTE: dependency between database onenode and diskgroup resource ora.DATA.dg is established
Successful mount of redo thread 2, with mount id 2692565945
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Lost write protection disabled
Create Relation IPS_PACKAGE_UNPACK_HISTORY
Completed: ALTER DATABASE MOUNT /* db agent *//* {2:37189:197} */
ALTER DATABASE OPEN /* db agent *//* {2:37189:197} */
Picked broadcast on commit scheme to generate SCNs
ARCH: STARTING ARCH PROCESSES
Mon Mar 19 16:22:35 2018
ARC0 started with pid=33, OS id=6087 
Mon Mar 19 16:22:36 2018
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Mon Mar 19 16:22:36 2018
ARC1 started with pid=35, OS id=6089 
Mon Mar 19 16:22:36 2018
ARC2 started with pid=36, OS id=6091 
ARC1: Archival started
ARC2: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
Mon Mar 19 16:22:36 2018
ARC3 started with pid=37, OS id=6093 
Mon Mar 19 16:22:36 2018
Thread 2 opened at log sequence 2
  Current log# 4 seq# 2 mem# 0: +DATA/onenode/onlinelog/group_4.273.971111363
  Current log# 4 seq# 2 mem# 1: +DATA/onenode/onlinelog/group_4.274.971111371
Successful open of redo thread 2
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Mon Mar 19 16:22:36 2018
SMON: enabling cache recovery
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
[6060] Successfully onlined Undo Tablespace 5.
Undo initialization finished serial:0 start:172224 end:173164 diff:940 (9 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Database Characterset is WE8MSWIN1252
Mon Mar 19 16:22:40 2018
minact-scn: Inst 2 is a slave inc#:4 mmon proc-id:6035 status:0x2
minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000
No Resource Manager plan active
Starting background process GTX0
Mon Mar 19 16:22:44 2018
GTX0 started with pid=39, OS id=6101 
Starting background process RCBG
Mon Mar 19 16:22:44 2018
RCBG started with pid=40, OS id=6113 
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Mon Mar 19 16:22:45 2018
QMNC started with pid=41, OS id=6124 
Mon Mar 19 16:22:53 2018
Completed: ALTER DATABASE OPEN /* db agent *//* {2:37189:197} */
Mon Mar 19 16:22:58 2018
ALTER SYSTEM SET service_names='one' SCOPE=MEMORY SID='onenode_2';
Mon Mar 19 16:22:59 2018
Starting background process CJQ0
Mon Mar 19 16:22:59 2018
CJQ0 started with pid=47, OS id=6290 
Mon Mar 19 16:23:02 2018
minact-scn: Inst 2 is now the master inc#:4 mmon proc-id:6035 status:0x7
minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000
minact-scn: Master returning as live inst:1 has inc# mismatch instinc:0 cur:4 errcnt:0
Mon Mar 19 16:23:20 2018
Reconfiguration started (old inc 4, new inc 6)
List of instances:
 2 (myinst: 2) 
 Global Resource Directory frozen
 * dead instance detected - domain 0 invalid = TRUE 
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Mon Mar 19 16:23:20 2018
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
 Post SMON to start 1st pass IR
Mon Mar 19 16:23:20 2018
Instance recovery: looking for dead threads
 Submitted all GCS remote-cache requests
 Post SMON to start 1st pass IR
 Fix write in gcs resources
Reconfiguration complete
Instance recovery: lock domain invalid but no dead threads
Mon Mar 19 16:23:23 2018
Restarting dead background process DIA0
Mon Mar 19 16:23:23 2018
DIA0 started with pid=10, OS id=6336 
Mon Mar 19 16:24:20 2018
Decreasing number of real time LMS from 1 to 0
[oracle@onenode2 trace]$ 

On node 1 (alert_onenode_1.log), the reconfiguration and the transactional shutdown of the original instance onenode_1 are logged:
Mon Mar 19 16:22:23 2018
Reconfiguration started (old inc 2, new inc 4)
List of instances:
 1 2 (myinst: 1) 
 Global Resource Directory frozen
 Communication channels reestablished
 Master broadcasted resource hash value bitmaps
 Non-local Process blocks cleaned out
Mon Mar 19 16:22:23 2018
 LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
 Set master node info 
 Submitted all remote-enqueue requests
 Dwn-cvts replayed, VALBLKs dubious
 All grantable enqueues granted
Mon Mar 19 16:22:23 2018
minact-scn: master found reconf/inst-rec after rec-scn scan old-inc#:2 new-inc#:4
 Submitted all GCS remote-cache requests
 Fix write in gcs resources
Reconfiguration complete
minact-scn: Master returning as live inst:2 has inc# mismatch instinc:0 cur:4 errcnt:0
Mon Mar 19 16:22:57 2018
ALTER SYSTEM SET service_names='onenode' SCOPE=MEMORY SID='onenode_1';
Shutting down instance (transactional local)
Stopping background process SMCO
Shutting down instance: further logons disabled
Stopping background process QMNC
Mon Mar 19 16:23:00 2018
Stopping background process CJQ0
Stopping background process MMNL
Stopping background process MMON
Local transactions complete. Performing immediate shutdown
License high water mark = 9
All dispatchers and shared servers shutdown
ALTER SYSTEM SET _shutdown_completion_timeout_mins=30 SCOPE=MEMORY;
ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:37189:246} */
Mon Mar 19 16:23:05 2018
SMON: disabling tx recovery
Stopping background process RCBG
SMON: disabling cache recovery
Mon Mar 19 16:23:07 2018
NOTE: Deferred communication with ASM instance
NOTE: deferred map free for map id 20
Mon Mar 19 16:23:07 2018
NOTE: Deferred communication with ASM instance
Redo thread 1 internally disabled at seq 1 (LGWR)
Shutting down archive processes
Archiving is disabled
Mon Mar 19 16:23:08 2018
ARCH shutting down
ARC3: Archival stopped
Mon Mar 19 16:23:08 2018
ARCH shutting down
ARC1: Archival stopped
Mon Mar 19 16:23:08 2018
ARCH shutting down
ARC0: Archival stopped
Mon Mar 19 16:23:09 2018
ARC2: Archiving disabled thread 1 sequence 1
Thread 1 closed at log sequence 1
Successful close of redo thread 1
Mon Mar 19 16:23:13 2018
NOTE: Deferred communication with ASM instance
NOTE: deferred map free for map id 5
Mon Mar 19 16:23:13 2018
Completed: ALTER DATABASE CLOSE NORMAL /* db agent *//* {2:37189:246} */
ALTER DATABASE DISMOUNT /* db agent *//* {2:37189:246} */
Shutting down archive processes
Archiving is disabled
Archived Log entry 2 added for thread 1 sequence 1 ID 0xa07c1e47 dest 1:
ARC2: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
ARCH shutting down
ARC2: Archival stopped
Mon Mar 19 16:23:17 2018
NOTE: Deferred communication with ASM instance
NOTE: deferred map free for map id 3
Completed: ALTER DATABASE DISMOUNT /* db agent *//* {2:37189:246} */
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Mar 19 16:23:18 2018
NOTE: force a map free for map id 3
Mon Mar 19 16:23:18 2018
Stopping background process VKTM
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Mon Mar 19 16:23:18 2018
NOTE: force a map free for map id 35
NOTE: force a map free for map id 34
Mon Mar 19 16:23:18 2018
NOTE: Shutting down MARK background process
Mon Mar 19 16:23:19 2018
freeing rdom 0
Mon Mar 19 16:23:25 2018
Instance shutdown complete
[oracle@onenode1 trace]$ 

Check the cluster status: the database and service resources are now running on node 2.

[root@onenode2 grid]# crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.OCR.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.asm
               ONLINE  ONLINE       onenode1                 Started             
               ONLINE  ONLINE       onenode2                 Started             
ora.gsd
               OFFLINE OFFLINE      onenode1                                     
               OFFLINE OFFLINE      onenode2                                     
ora.net1.network
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.ons
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.registry.acfs
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       onenode1                                     
ora.cvu
      1        ONLINE  ONLINE       onenode1                                     
ora.oc4j
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode.db
      2        ONLINE  ONLINE       onenode2                 Open                
ora.onenode.one.svc
      1        ONLINE  ONLINE       onenode2                                     
ora.onenode1.vip
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode2.vip
      1        ONLINE  ONLINE       onenode2                                     
ora.scan1.vip
      1        ONLINE  ONLINE       onenode1                                     
[root@onenode2 grid]# 
[root@onenode2 grid]# srvctl config service -d onenode
Service name: one
Service is enabled
Server pool: onenode
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition: 
Preferred instances: onenode_2
Available instances: 
[root@onenode2 grid]# 
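
A quick SQL cross-check (a sketch; run as the oracle user on either node) to confirm that only instance onenode_2 is open after the relocation:

sqlplus / as sysdba
SQL> select inst_id, instance_name, host_name, status from gv$instance;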

The relocation test is complete.
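
Clients connect through the service "one" and the SCAN listener (remote_listener is onenode-scan:1521 in the alert log above), so no client-side change is needed after a relocation. An illustrative tnsnames.ora entry, assuming this SCAN name:

ONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = onenode-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = one)
    )
  )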


Converting RAC One Node to RAC

With -n onenode1 the conversion fails because the database is not running on node 1; -n onenode2 must be used instead.

[root@onenode1 ~]# su - oracle
[oracle@onenode1 ~]$ srvctl convert database -d onenode -c RAC -n onenode1
PRCD-1148 : Failed to convert the configuration of RAC One Node database onenode into its equivalent configuration for RAC database
PRCD-1151 : Failed to convert the configuration of RAC One Node database onenode into its equivalent RAC database configuration because RAC One Node database is running on a different server onenode2 than the given target server onenode1
[oracle@onenode1 ~]$ 

[oracle@onenode1 ~]$ srvctl convert database -d onenode -c RAC -n onenode2
[oracle@onenode1 ~]$

Check the resource status: the database resource is still on node 2 only, and there is no database instance resource on node 1 yet.

[grid@onenode1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.OCR.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.asm
               ONLINE  ONLINE       onenode1                 Started             
               ONLINE  ONLINE       onenode2                 Started             
ora.gsd
               OFFLINE OFFLINE      onenode1                                     
               OFFLINE OFFLINE      onenode2                                     
ora.net1.network
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.ons
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.registry.acfs
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       onenode1                                     
ora.cvu
      1        ONLINE  ONLINE       onenode1                                     
ora.oc4j
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode.db
      2        ONLINE  ONLINE       onenode2                 Open                
ora.onenode.one.svc
      1        ONLINE  ONLINE       onenode2                                     
ora.onenode1.vip
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode2.vip
      1        ONLINE  ONLINE       onenode2                                     
ora.scan1.vip
      1        ONLINE  ONLINE       onenode1                                     
[grid@onenode1 ~]$ 

This is because an instance for node 1 still has to be added:

srvctl add instance -d onenode -i onenode_1 -n onenode1 

[oracle@onenode1 ~]$ srvctl add instance -d onenode -i onenode_1 -n onenode1
[oracle@onenode1 ~]$ 

srvctl config database -d onenode

[oracle@onenode1 ~]$ srvctl config database -d onenode
Database unique name: onenode
Database name: onenode
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/onenode/spfileonenode.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: onenode
Database instances: onenode_1,onenode_2
Disk Groups: DATA
Mount point paths: 
Services: one
Type: RAC
Database is administrator managed
[oracle@onenode1 ~]$ 

Start the database to bring up the newly added instance on node 1:

srvctl start database -d onenode 
[oracle@onenode1 ~]$ srvctl start database -d onenode
[oracle@onenode1 ~]$ 
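
After the start, both instances and the services they host can be checked with srvctl (the -v flag lists services per instance; output omitted):

srvctl status database -d onenode -v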

Check the resource status: the database has been converted to RAC and the instances on both nodes are open. Note: start the database with srvctl; if it is started from SQL*Plus, the resource state is not shown correctly (during testing, an instance started from SQL*Plus did appear in the resource list, but in an OFFLINE state). An illustrative srvctl command follows the resource listing below.

[grid@onenode1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.OCR.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.asm
               ONLINE  ONLINE       onenode1                 Started             
               ONLINE  ONLINE       onenode2                 Started             
ora.gsd
               OFFLINE OFFLINE      onenode1                                     
               OFFLINE OFFLINE      onenode2                                     
ora.net1.network
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.ons
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.registry.acfs
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       onenode1                                     
ora.cvu
      1        ONLINE  ONLINE       onenode1                                     
ora.oc4j
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode.db
      1        ONLINE  ONLINE       onenode1                 Open                
      2        ONLINE  ONLINE       onenode2                 Open                
ora.onenode.one.svc
      1        ONLINE  ONLINE       onenode2                                     
ora.onenode1.vip
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode2.vip
      1        ONLINE  ONLINE       onenode2                                     
ora.scan1.vip
      1        ONLINE  ONLINE       onenode1                                     
[grid@onenode1 ~]$ 
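
As noted above, individual instances should likewise be started through srvctl rather than SQL*Plus, so that the cluster resource state stays consistent; an illustrative command for the node-1 instance:

srvctl start instance -d onenode -i onenode_1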

Use srvctl to check the database and service configuration.

[grid@onenode1 ~]$ srvctl config service -d onenode
Service name: one
Service is enabled
Server pool: onenode_one
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition: 
Preferred instances: onenode_2
Available instances: 
[grid@onenode1 ~]$ srvctl config database -d onenode
Database unique name: onenode
Database name: onenode
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/onenode/spfileonenode.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: onenode
Database instances: onenode_1,onenode_2
Disk Groups: DATA
Mount point paths: 
Services: one
Type: RAC
Database is administrator managed
[grid@onenode1 ~]$ 

Reconfigure the service (omitted here):

srvctl remove service -d onenode -s one
srvctl add service -d onenode -s onenodesrv -r onenode_1,onenode_2 -P BASIC
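
If the service is actually recreated as above, it can be verified and started with srvctl (illustrative only, since this step was skipped in the test):

srvctl config service -d onenode -s onenodesrv
srvctl start service -d onenode -s onenodesrv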

Converting RAC back to RAC One Node. The extra instances have to be removed first, so that only one instance remains.

[oracle@onenode1 ~]$ srvctl remove instance -d onenode -i onenode_1
Remove instance from the database onenode? (y/[n]) y
PRCD-1052 : Failed to remove instance from database onenode
PRCD-1101 : Failed to remove running instance onenode_1 for database onenode
[oracle@onenode1 ~]$ 
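
The PRCD-1101 error shows that instance onenode_1 is still running; it must be stopped before it can be removed. The stop command is not captured in the session above, but would be along the lines of:

srvctl stop instance -d onenode -i onenode_1

With the instance stopped, the removal then succeeds: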

[oracle@onenode1 ~]$ srvctl remove instance -d onenode -i onenode_1
Remove instance from the database onenode? (y/[n]) y
[oracle@onenode1 ~]$ 

Only one instance remains:

[grid@onenode1 ~]$ srvctl config database -d onenode
Database unique name: onenode
Database name: onenode
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/onenode/spfileonenode.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: onenode
Database instances: onenode_2
Disk Groups: DATA
Mount point paths: 
Services: one
Type: RAC
Database is administrator managed
[grid@onenode1 ~]$ 

Remove the service (omitted here, because the service was never actually removed and re-added in the previous step):

srvctl remove service -d onenode -s onenodesrv
srvctl add service -d onenode -s one -P BASIC

Perform the conversion back to RAC One Node:

srvctl convert database -d onenode -c RACONENODE -w 30 -i onenode
[oracle@onenode2 ~]$ srvctl convert database -d onenode -c RACONENODE -w 30 -i onenode
[oracle@onenode2 ~]$ 
[oracle@onenode2 ~]$ srvctl config database -d onenode
Database unique name: onenode
Database name: onenode
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/onenode/spfileonenode.ora
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: onenode
Database instances: 
Disk Groups: DATA
Mount point paths: 
Services: one
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: onenode
Candidate servers: onenode2
Database is administrator managed
[oracle@onenode2 ~]$ 
[grid@onenode2 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.OCR.dg
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.asm
               ONLINE  ONLINE       onenode1                 Started             
               ONLINE  ONLINE       onenode2                 Started             
ora.gsd
               OFFLINE OFFLINE      onenode1                                     
               OFFLINE OFFLINE      onenode2                                     
ora.net1.network
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.ons
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
ora.registry.acfs
               ONLINE  ONLINE       onenode1                                     
               ONLINE  ONLINE       onenode2                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       onenode1                                     
ora.cvu
      1        ONLINE  ONLINE       onenode1                                     
ora.oc4j
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode.db
      2        ONLINE  ONLINE       onenode2                 Open                
ora.onenode.one.svc
      1        ONLINE  ONLINE       onenode2                                     
ora.onenode1.vip
      1        ONLINE  ONLINE       onenode1                                     
ora.onenode2.vip
      1        ONLINE  ONLINE       onenode2                                     
ora.scan1.vip
      1        ONLINE  ONLINE       onenode1                                     
[grid@onenode2 ~]$ 
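
After the conversion, the database is managed as RAC One Node again. Its status, including the online relocation state reported for RAC One Node databases, can be checked with (output omitted):

srvctl status database -d onenode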

end







