Posted on 2010-10-27 09:33:35
Author: mfkqwyc86   QQ: 113257174   itpub space: http://space.itpub.net/9664900
This article has been compiled into PDF format; see post #8: 【Oracle RAC】Linux + Oracle 11g R2 RAC 安装配置详细过程.pdf

Environment:
Two Oracle Linux AS 5.5 machines
Oracle 11g R2

1. IP planning

127.0.0.1 localhost.localdomain localhost
#public ip
192.168.10.211 rac1
192.168.10.212 rac2
#priv ip
10.10.10.211 rac1prv
10.10.10.212 rac2prv
#vip ip
192.168.10.213 rac1vip
192.168.10.214 rac2vip
#scan ip
192.168.10.215 racscan

2. Disk planning

+CRS          three 2 GB disks
+DGDATA       three 10 GB disks
+DGRECOVERY   two 5 GB disks

Create the OS groups and users:

/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 503 oper
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 505 asmoper
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/useradd -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

[root@ora1 ~]# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),506(asmdba)
[root@ora1 ~]# id grid
uid=502(grid) gid=501(oinstall) groups=501(oinstall),502(dba),503(oper),504(asmadmin),505(asmoper),506(asmdba)

Create the installation directories:

mkdir /oracle/app/
chown -R grid:oinstall /oracle/app/
chmod -R 775 /oracle/app/
mkdir -p /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/oraInventory
chmod -R 775 /oracle/app/oraInventory
mkdir -p /oracle/app/grid
mkdir -p /oracle/app/oracle
chown -R grid:oinstall /oracle/app/grid
chown -R oracle:oinstall /oracle/app/oracle
chmod -R 775 /oracle/app/oracle
chmod -R 775 /oracle/app/grid
passwd grid
passwd oracle

3. Operating system version

[root@rac1 ~]# lsb_release -a
LSB Version:    :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: EnterpriseEnterpriseServer
Description:    Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
Release:        5.5
Codename:       Carthage
[root@rac1 ~]# uname -a
Linux rac1 2.6.18-194.el5 #1 SMP Mon Mar 29 20:06:41 EDT 2010 i686 i686 i386 GNU/Linux
[root@rac1 ~]#

Modify the system parameters:

vi /etc/security/limits.conf
#ORACLE SETTING
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

vi /etc/pam.d/login
#ORACLE SETTING
session required pam_limits.so

vi /etc/sysctl.conf
#ORACLE SETTING
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Time-synchronization settings required by grid (a new prerequisite check in 11gR2): stop ntpd and rename its configuration file out of the way, so that Oracle Cluster Time Synchronization Service is used instead.

#Network Time Protocol Setting
/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org

Handling insufficient /dev/shm shared memory. For example, to grow /dev/shm to 1 GB, change this line in /etc/fstab from the default:
none /dev/shm tmpfs defaults 0 0
to:
none /dev/shm tmpfs defaults,size=1024m 0 0
The size parameter also accepts G as a unit: size=1G. Remount /dev/shm for the change to take effect:
# mount -o remount /dev/shm
or:
# umount /dev/shm
# mount -a
You can check the change immediately with "df -h".

Edit the .bash_profile of the grid and oracle users:

#grid user profile (set ORACLE_HOSTNAME yourself)
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=/oracle/app/grid/product/11.2.0; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
THREADS_FLAG=native; export THREADS_FLAG
PATH=$ORACLE_HOME/bin:$PATH; export PATH
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

#oracle user profile (set ORACLE_HOSTNAME yourself)
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0; export ORACLE_HOME
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK; export NLS_LANG
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi

Configure SSH user equivalence:

1) On the primary node RAC1, generate the public and private keys for the grid and oracle users (shown for oracle; repeat as grid):
# ping rac2-eth0
# ping rac2-eth1
# su - oracle
$ mkdir ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa

2) Do the same on the secondary node RAC2 and make sure the nodes can reach each other:
# ping rac1-eth0
# ping rac1-eth1
# su - oracle
$ mkdir ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa

3) On RAC1, as the oracle user:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

4) Verify from RAC1 (no password prompt should appear):
$ ssh rac1 date
$ ssh rac2 date
$ ssh rac1prv date
$ ssh rac2prv date

5) Verify from RAC2:
$ ssh rac1 date
$ ssh rac2 date
$ ssh rac1prv date
$ ssh rac2prv date

Install the ASMLib packages:

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.3-1.el5.i386.rpm

Partition the disks:

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 261 2096451 83 Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 261 2096451 83 Linux

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdf1 1 261 2096451 83 Linux

Disk /dev/sdg: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdg1 1 1305 10482381 83 Linux

Disk /dev/sdh: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdh1 1 1305 10482381 83 Linux

Disk /dev/sdi: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdi1 1 1305 10482381 83 Linux

Disk /dev/sdj: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdj1 1 652 5237158+ 83 Linux

Disk /dev/sdk: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdk1 1 652 5237158+ 83 Linux

Configure ASM:

[root@ora1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

Create the ASM disks:

+CRS          three 2 GB disks
+DGDATA       three 10 GB disks
+DGRECOVERY   two 5 GB disks

/dev/sdd: 2097152
/dev/sde: 2097152
/dev/sdf: 2097152
/dev/sdg: 10485760
/dev/sdh: 10485760
/dev/sdi: 10485760
/dev/sdj: 5242880
/dev/sdk: 5242880

[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS1 /dev/sdd1
Marking disk "CRS1" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS2 /dev/sde1
Marking disk "CRS2" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS3 /dev/sdf1
Marking disk "CRS3" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk DATA1 /dev/sdg1
Marking disk "DATA1" as an ASM disk: [FAILED]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk DATA1 /dev/sdg1
Marking disk "DATA1" as an ASM disk: [ OK ]
(the first createdisk attempt on /dev/sdg1 failed; simply rerunning the same command succeeded)
[root@ora2 asm]# /etc/init.d/oracleasm createdisk DATA2 /dev/sdh1
Marking disk "DATA2" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk DATA3 /dev/sdi1
Marking disk "DATA3" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk REC1 /dev/sdj1
Marking disk "REC1" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk REC2 /dev/sdk1
Marking disk "REC2" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
CRS3
DATA1
DATA2
DATA3
REC1
REC2

Install and verify the cvuqdisk package:

Install the OS package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and when it runs (manually, or automatically at the end of the Oracle Grid Infrastructure installation) you will receive the error "Package cvuqdisk not installed". Use the cvuqdisk RPM that matches your hardware architecture (for example, x86_64 or i386). The cvuqdisk RPM is included in the rpm directory on the Oracle Grid Infrastructure installation media. Set the CVUQDISK_GRP environment variable to the group that owns cvuqdisk (oinstall in this article):

export CVUQDISK_GRP=oinstall

Verify the Oracle Clusterware requirements with CVU. Remember to run it as the grid user on the node from which the installation will be performed (rac1 here); SSH user equivalence must already be configured for the grid user. In the grid software directory, run:

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
(check output omitted...)

Verify the hardware and operating system setup with CVU:

./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
[grid@rac1 grid]$ ./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
(check output omitted...)

6. Install Grid Infrastructure

su - grid
./runInstaller

SCAN configuration:
cluster name: sanclusters
SCAN name: racscan
SCAN port: 1521

Run /oracle/app/oraInventory/orainstRoot.sh on both nodes:

[root@rac1 soft]# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac2 soft]# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.

Then run /oracle/app/grid/product/11.2.0/root.sh, first on rac1:

[root@rac1 soft]# /oracle/app/grid/product/11.2.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/grid/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-07-28 16:19:03: Parsing the host name
2010-07-28 16:19:03: Checking for super user privileges
2010-07-28 16:19:03: User has super user privileges
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
DiskGroup CRS created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk a81aaf52b2b74ff5bf7a773e7966ea7c.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name   Disk group
--  -----    -----------------                ---------   ---------
 1. ONLINE   a81aaf52b2b74ff5bf7a773e7966ea7c (ORCL:CRS1) [CRS]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
rac1 2010/07/28 16:31:27 /oracle/app/grid/product/11.2.0/cdata/rac1/backup_20100728_163127.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 971 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.

Then run root.sh on rac2:

[root@rac2 soft]# /oracle/app/grid/product/11.2.0/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /oracle/app/grid/product/11.2.0
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-08-02 14:32:28: Parsing the host name
2010-08-02 14:32:28: Checking for super user privileges
2010-08-02 14:32:28: User has super user privileges
Using configuration parameter file: /oracle/app/grid/product/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac2'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
rac2 2010/08/02 14:37:51 /oracle/app/grid/product/11.2.0/cdata/rac2/backup_20100802_143751.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 1202 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/app/oraInventory
'UpdateNodeList' was successful.
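With root.sh finished on both nodes, resource status is typically reviewed with crs_stat -t, whose output appears below. As a small convenience, the following awk sketch (a hypothetical helper, not part of Oracle's tooling) filters that output for resources whose Target and State columns disagree, which is what you would investigate after an install:

```shell
#!/bin/sh
# Hypothetical helper: read `crs_stat -t`-style output on stdin and
# print the name of any resource whose Target and State columns
# differ. NR > 2 skips the two header lines.
check_crs() {
  awk 'NR > 2 && NF >= 5 && $3 != $4 { print $1 }'
}

# Demo on fabricated sample lines (not real cluster output):
printf '%s\n' \
  'Name           Type           Target    State     Host' \
  '------------------------------------------------------------' \
  'ora.gsd        ora.gsd.type   ONLINE    OFFLINE   rac1' \
  'ora.ons        ora.ons.type   ONLINE    ONLINE    rac1' \
  | check_crs
# prints: ora.gsd
```

On a live cluster you would pipe the real output through it, e.g. `crs_stat -t | check_crs`; an empty result means every resource has reached its target state.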
[grid@rac2 ~]$ srvctl enable oc4j
PRKO-2116 : OC4J is already enabled
[grid@rac2 ~]$ srvctl start oc4j
[grid@rac2 ~]$ srvctl enable nodeapps
PRKO-2415 : VIP is already enabled on node(s): rac1,rac2
PRKO-2416 : Network resource is already enabled.
PRKO-2417 : ONS is already enabled on node(s): rac1,rac2
PRKO-2418 : eONS is already enabled on node(s): rac1,rac2
[grid@rac2 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.eons       ora.eons.type  ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   ONLINE    ONLINE    rac1
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac2
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora....ry.acfs ora....fs.type ONLINE    ONLINE    rac1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1

[ Last edited by mfkqwyc86 on 2010-10-28 07:53 ]
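One closing note on the kernel parameters set earlier: kernel.shmall is counted in pages while kernel.shmmax is in bytes, so the two are easy to mis-size relative to each other. A quick sketch of the arithmetic, assuming the typical 4 KB page size (on a live system, verify it with getconf PAGE_SIZE):

```shell
#!/bin/sh
# Sketch: convert the kernel.shmall page count from /etc/sysctl.conf
# above into bytes. 4096 is an assumed page size, typical on x86.
page_size=4096
shmall_pages=2097152          # value set in /etc/sysctl.conf above
shmall_bytes=$(( shmall_pages * page_size ))
echo "$shmall_bytes"          # prints: 8589934592  (8 GB)
```

By contrast, kernel.shmmax = 536870912 above is already in bytes and caps a single shared memory segment at 512 MB; for a database with a larger SGA, shmmax would normally need to be raised as well.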
Tags: Oracle, lab, Linux, installation, configuration