Oracle RAC

Official:

VMware: http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVMwareServer2.php

VirtualBox: http://www.oracle-base.com/articles/11g/OracleDB11gR2RACInstallationOnOEL5UsingVirtualBox.php

Third-party:

http://bbs.chinaunix.net/thread-1835746-1-1.html

http://www.linuxidc.com/Linux/2011-02/31930p3.htm

http://wenku.baidu.com/view/6eac4dee102de2bd9605889e.html

http://wenku.baidu.com/view/e053d785b9d528ea81c7791c.html

http://www.linuxidc.com/Linux/2011-02/31930.htm

RPM packages required for installing 11gR2 on Red Hat EL5:

http://space.itpub.net/15415488/viewspace-616215

http://www.juliandyke.com/rpmcheck.html

http://rpm.pbone.net/index.php3/stat/4/idpl/7604958/dir/startcom_5/com/glibc-common-2.5-24.i386.rpm.html
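
A quick way to check these prerequisites is an rpm query; the package list below is the usual 11gR2-on-EL5 set from guides like the ones above, so verify it against the install guide for the exact platform:

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
      gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio \
      libaio-devel libgcc libstdc++ libstdc++-devel make sysstat \
      unixODBC unixODBC-devel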

 

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-09-20 21:10:11: Parsing the host name
2011-09-20 21:10:11: Checking for super user privileges
2011-09-20 21:10:11: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
/u01/app/11.2.0/grid/bin/srvctl start nodeapps -n rac2 ... failed
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3039 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory

 

rac1:

Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 9c793bb457ea4f8bbf08783c2ccdab62.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   9c793bb457ea4f8bbf08783c2ccdab62 (/dev/oracleasm/disks/DISK1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-0184: Cannot communicate with the CRS daemon.

PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd
add asm ... failed
clsr_start_dg return error at loc: 70 rc=0
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Create failed, or completed with errors.
create diskgroup DATA ... failed
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Add failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl add type ora.registry.acfs.type -basetype ora.local_resource.type -file /u01/app/11.2.0/grid/crs/template/registry.acfs.type
add ora.registry.acfs.type ... failed
PRCR-1070 : Failed to check if resource ora.net1.network is registered
Cannot communicate with crsd
add scan=rac-scan ... failed
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3039 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
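
For reference, a few basic checks when crsd will not come up after root.sh (a sketch; run as root on the failing node, paths assume the grid home used here):

/u01/app/11.2.0/grid/bin/crsctl check crs
/u01/app/11.2.0/grid/bin/crsctl stat res -t -init
# 11.2 clusterware logs live under the grid home:
tail -f /u01/app/11.2.0/grid/log/rac1/alertrac1.log
tail -f /u01/app/11.2.0/grid/log/rac1/crsd/crsd.log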

 

Second attempt:

oracle net configuration failed.
oracle private interconnect configuration assistant failed.

 cluster verification tool failed.

Ran out of disk space: of the 20G, only 2G was left and 4.7G more was needed! Deleted the installation files.

 

When creating the database.... pre-check errors:

1 CRS Integrity - This test checks the integrity of Oracle Clusterware stack across the cluster nodes.
  Check Failed on Nodes: [rac1] Check Succeeded On Nodes: [rac2] 
Verification result of failed node: rac1
Expected Value
 : n/a
Actual Value
 : n/a
 List of errors:
 - 
PRVF-5305 : The Oracle clusterware is not healthy on node "rac1" CRS-4535: Cannot communicate with Cluster Ready Services CRS-4529: Cluster Synchronization Services is online CRS-4533: Event Manager is online  - Cause:  An error was found with the Oracle Clusterware on the node specified.  - Action:  Review the error reported and resolve the issue specified.

2 Node Application Existence - This test checks the existence of Node Applications on the system.
  Operation Failed on Nodes: [rac2,  rac1] 
Verification result of failed node: rac2
Expected Value
 : n/a
Actual Value
 : n/a
 List of errors:
 - 
PRVF-4556 : Failed to check existence of node application "VIP" on node "rac2"  - Cause:  Could not verify existence of the nodeapp identified on the node specified .  - Action:  Ensure that the resource specified is available on the node specified, see 'srvctl add nodeapps' for further information.
 - 
PRVF-4556 : Failed to check existence of node application "NETWORK" on node "rac2"  - Cause:  Could not verify existence of the nodeapp identified on the node specified .  - Action:  Ensure that the resource specified is available on the node specified, see 'srvctl add nodeapps' for further information.
Verification result of failed node: rac1
Expected Value
 : n/a
Actual Value
 : n/a
 List of errors:
 - 
PRVF-4556 : Failed to check existence of node application "VIP" on node "rac1"  - Cause:  Could not verify existence of the nodeapp identified on the node specified .  - Action:  Ensure that the resource specified is available on the node specified, see 'srvctl add nodeapps' for further information.
 - 
PRVF-4556 : Failed to check existence of node application "NETWORK" on node "rac1"  - Cause:  Could not verify existence of the nodeapp identified on the node specified .  - Action:  Ensure that the resource specified is available on the node specified, see 'srvctl add nodeapps' for further information.

 

Installed it all over again and still got errors; this time only a single root.sh run was involved....
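
For reference, a sketch of the usual cleanup before re-running a failed root.sh, instead of reinstalling from scratch (run as root; grid home as above):

cd /u01/app/11.2.0/grid/crs/install
perl rootcrs.pl -deconfig -force
# then re-run:
/u01/app/11.2.0/grid/root.sh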

CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-0184: Cannot communicate with the CRS daemon.

PRCR-1070 : Failed to check if resource ora.asm is registered
Cannot communicate with crsd
add asm ... failed
clsr_start_dg return error at loc: 70 rc=0
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Create failed, or completed with errors.
create diskgroup DATA ... failed
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Add failed, or completed with errors.
Command return code of 1 (256) from command: /u01/app/11.2.0/grid/bin/crsctl add type ora.registry.acfs.type -basetype ora.local_resource.type -file /u01/app/11.2.0/grid/crs/template/registry.acfs.type
add ora.registry.acfs.type ... failed
PRCR-1070 : Failed to check if resource ora.net1.network is registered
Cannot communicate with crsd
add scan=rac-scan ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3039 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

 

 RAC2:

CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
/u01/app/11.2.0/grid/bin/srvctl start nodeapps -n rac2 ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3039 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

 

With VirtualBox, existing disk media cannot simply be created anew and then imported; they have to be registered.

Duplicate UUID problem:

http://www.nonabyte.net/how-to-copy-a-virtualbox-vdi/

C:\Program Files\Oracle\VirtualBox>VBoxManage internalcommands sethduuid G:\Installed_Media_RAC\rac1\rac1.vdi
UUID changed to: 148c3137-3df0-45ca-97bc-b0378749bdb1

C:\Program Files\Oracle\VirtualBox>VBoxManage internalcommands sethduuid G:\Installed_Media_RAC\rac2\rac2.vdi
UUID changed to: 890eace8-a2c2-4206-b52b-0edbff95575d

There is also a 'parent' UUID entry whose purpose is unclear.

This time I created them new; registering did not work well :(

 

The ASM disks still have to be added by hand; with VirtualBox, editing the configuration file directly is not the way. But this time the shareable setting did not need to be set again, it was detected automatically.
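
Adding a shared ASM disk by hand would look roughly like this (the controller name, port and asm1.vdi path are assumptions, adjust to the actual VM; --mtype shareable is optional here since the flag was already detected):

VBoxManage storageattach rac1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium G:\Installed_Media_RAC\asm1.vdi --mtype shareable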

 

The partitions already existed, so there was no need to partition again.

 

 

Create the ASM disks on one node, then run scandisks and listdisks; on the other node only scandisks and listdisks are needed.
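
In oracleasm terms that is roughly the following (DISK1 matches the disk name used earlier; /dev/sdb1 is a placeholder for the shared partition):

# node 1: create the disk, then rescan and list
/etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

# node 2: the disk already exists, only rescan and list
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks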

 

NTP problems:

http://www.linuxsir.org/bbs/thread307840.html good article.

Comment everything out, leaving only the server and driftfile settings.

server cn.pool.ntp.org kr.pool.ntp.org
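
A sketch of the trimmed /etc/ntp.conf (the driftfile path is the EL5 default):

driftfile /var/lib/ntp/drift
server cn.pool.ntp.org
server kr.pool.ntp.org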

ntpdate cn.pool.ntp.org 

Stop the NTP service for now (so CTSS can run in active mode).

 

Got a scare: it came up very slowly. On rac1:

[oracle@rac1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status                 
  ------------------------------------  ------------------------
  rac1                                  passed                 
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State                  
  ------------------------------------  ------------------------
  rac1                                  Active                 
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status                 
  ------------  ------------------------  ------------------------
  rac1          700.0                     passed                 

Time offset is within the specified limits on the following set of nodes:
"[rac1]"
Result: Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.

 

Configuring the CTSS service:

http://space.itpub.net/7199859/viewspace-628439

http://www.itpub.net/thread-1423867-1-1.html

 

A writeup of a working RAC setup:

http://blog.csdn.net/tianlesoftware/article/details/6009962

 

Managing 11g RAC:

http://candon123.blog.51cto.com/704299/336023

Another failure on node 1. The cause:

Check CTSS state started...
Check: CTSS state
  Node Name                             State                  
  ------------------------------------  ------------------------
  rac1                                  Active                 
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status                 
  ------------  ------------------------  ------------------------
  rac1          2400.0                    failed                 
Result: PRVF-9661 : Time offset is NOT within the specified limits on the following nodes:
"[rac1]"

PRVF-9652 : Cluster Time Synchronization Services check failed

??? Is CTSS not doing its job? A 2-second offset and it does not fix it?

Synchronized the clocks manually and continued: ntpdate cn.pool.ntp.org

 

Finally able to create the database; it took 6 or 7 hours!!!

There is the session log and the Oracle log to check whether it is still making progress.
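
Where to watch for progress (these paths are assumptions based on the ORACLE_BASE and the db name "rac" used in these notes):

tail -f /u01/app/oracle/cfgtoollogs/dbca/rac/trace.log
tail -f /u01/app/oracle/diag/rdbms/rac/rac1/trace/alert_rac1.log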

 

The cause of the slowness was the snapshot; never take snapshots, especially during installation.

Stop the VM in VirtualBox, then run vboxmanage snapshot rac1 delete xxxx (the snapshot IDs can be found in the configuration file, rac1.vbox).
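
The same thing without digging through rac1.vbox, as a sketch (VM powered off; substitute the real snapshot name or UUID):

VBoxManage snapshot rac1 list
VBoxManage snapshot rac1 delete <snapshot-uuid-or-name>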

After deleting, two snapshot files were gone, but there is still a 14G one under the Snapshots directory? ad543e49-786f-4538-a03b-2a6e52265542

 

Found it in the configuration file as well and deleted that one too.

 

Experiments to try:

A general-purpose test setup.

Add a node / remove a node.

Grow capacity / shrink capacity.

ACFS test.

Failover test; resource load balancing.

 

10.8: bringing the cluster up is very, very slow.

rac1 had been starting for a long while before the database finally came up; not even as fast as starting everything by hand, one instance at a time.

rac2 crashed once and took many restarts before it came back up.

 

The final status:

[oracle@rac2 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1       
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1       
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1       
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1       
ora.eons       ora.eons.type  ONLINE    ONLINE    rac1       
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              
ora....network ora....rk.type ONLINE    ONLINE    rac1       
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              
ora.ons        ora.ons.type   ONLINE    UNKNOWN   rac2       
ora.rac.db     ora....se.type ONLINE    ONLINE    rac1       
ora....SM1.asm application    ONLINE    ONLINE    rac1       
ora....C1.lsnr application    ONLINE    ONLINE    rac1       
ora.rac1.gsd   application    OFFLINE   OFFLINE              
ora.rac1.ons   application    ONLINE    UNKNOWN   rac1       
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1       
ora....SM2.asm application    ONLINE    ONLINE    rac2       
ora....C2.lsnr application    ONLINE    ONLINE    rac2       
ora.rac2.gsd   application    OFFLINE   OFFLINE              
ora.rac2.ons   application    ONLINE    UNKNOWN   rac2       
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2       
ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac1       
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1       
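
crs_stat is deprecated in 11.2; the same picture, plus a quick stack health check, can be had with:

crsctl stat res -t
crsctl check crs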

 

rac2 crashed again on its own after rac1 went down; during startup it sat at the ASM disk scan for a while.. and then crashed. Still need to let rac1 start up completely and then watch how rac2 behaves.

 

The CPU is not up to it; for experiments, just bring up one node for now, since at 100% CPU nothing gets done.

 

ACFS (the experiment was actually done with the asmca tool):

1) Create the volume (steps omitted).

2) Create the ACFS file system and register the mount point:

Create ACFS Command:

/sbin/mkfs -t acfs /dev/asm/testv1-409


Register MountPoint Command:

/sbin/acfsutil registry -a -f /dev/asm/testv1-409 /u01/app/oracle/acfsmounts/data_testv1
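
The registry entry makes the file system mount automatically when the clusterware starts; to mount it right away by hand (same device and mount point as above):

/bin/mount -t acfs /dev/asm/testv1-409 /u01/app/oracle/acfsmounts/data_testv1
df -h /u01/app/oracle/acfsmounts/data_testv1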

 
