Oracle RAC 10g on Linux (VMware) installation notes

These notes have not been tidied up and are for reference only; feel free to discuss anything that is unclear.
1. Install the host NIC driver
The host is a Dell OptiPlex 780 that shipped with Ubuntu. After wiping it and installing Windows 2003 Server, the NIC was no longer recognized, so download and install the NIC driver from
http://ftp.us.dell.com/network/R197373.exe

First set up the virtual machine as a single node.
2. Install the virtual machine
The VM software used here is VMwareServerv1.0.5Build80187.exe (VMware Server v1.0.5 Build 80187).
Install VMware Tools:
In the VM menu choose Install VMware Tools ==> Install.
The system displays a warning:
WARNING: You cannot install the VMware Tools package until the guest operating system is running. If your guest operating system is not running, choose Cancel and install the VMware Tools package later.

Workaround:
In theory the VMware Tools CD image should be mounted automatically, but it never appeared, so mount it manually.
Point the virtual machine's CD-ROM drive at linux.iso:
[root@mcrac1 ~]# mkdir -p /mnt/cdrom
[root@mcrac1 ~]# mount /dev/cdrom /mnt/cdrom
mount: block device /dev/cdrom is write-protected, mounting read-only

Double-click the vmware tools desktop icon; instead of double-clicking the *.rpm, copy the tarball out and extract it:
[root@mcrac1 cdrom]# cp VMwareTools-1.0.5-80187.tar.gz /tmp
[root@mcrac1 cdrom]# cd /tmp
[root@mcrac1 tmp]# tar zxpf VMwareTools-1.0.5-80187.tar.gz

Run the installer in the virtual machine console:
[root@mcrac1 vmware-tools-distrib]# ./vmware-install.pl


3. Synchronize the guest OS time with the host OS
As root in the virtual machine console, run:
#vmware-toolbox

Afterwards the virtual machine's configuration (.vmx) file contains:
tools.syncTime = "TRUE"

Edit /boot/grub/grub.conf, adding clock=pit nosmp noapic nolapic to the kernel lines:
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux AS (2.6.9-55.ELsmp)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9-55.ELsmp ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
        initrd /boot/initrd-2.6.9-55.ELsmp.img
title Red Hat Enterprise Linux AS-up (2.6.9-55.EL)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.9-55.EL ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
        initrd /boot/initrd-2.6.9-55.EL.img

Reboot the system:
#reboot


When the NIC is brought up, the following error appears:
Device eth0 has different MAC address than expected, ignoring

Workaround:
Comment out the HWADDR=xx:xx:xx:xx:xx:xx line in /etc/sysconfig/network-scripts/ifcfg-eth0, as in the sketch below.
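A minimal sketch of the edited file, assuming a static configuration that matches mcrac1 (the addressing values are illustrative):
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
# HWADDR=xx:xx:xx:xx:xx:xx   (commented out so the changed MAC address is accepted)
IPADDR=172.16.4.81
NETMASK=255.255.255.0
ONBOOT=yes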

4. Configure the oracle user
[root@mcrac1 asm]# groupadd dba
[root@mcrac1 asm]# groupadd oinstall
[root@mcrac1 asm]# useradd -m -g oinstall -G dba oracle
[root@mcrac1 asm]# id oracle
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),501(dba)
[root@mcrac1 asm]# passwd oracle
[root@mcrac1 /]# mkdir -p /oracle
[root@mcrac1 /]# chown -R oracle:dba /oracle
[root@mcrac1 /]# chmod -R 775 /oracle


Environment variables for the oracle user:
export ORACLE_SID=dbrac1
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_2
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022


5. Install the ASMLib packages
Download the ASMLib RPMs that match the kernel from
http://www.oracle.com/technology/tech/linux/asmlib/index.html
The Linux kernel version is:
[root@mcrac1 ~]# uname -a
Linux mcrac1 2.6.9-55.ELsmp #1 SMP Fri Apr 20 17:03:35 EDT 2007 i686 i686 i386 GNU/Linux

Install the ASMLib packages:
[root@mcrac1 asm]# rpm -Uvh oracleasm-support-2.1.3-1.el4.i386.rpm
warning: oracleasm-support-2.1.3-1.el4.i386.rpm: V3 DSA signature: NOKEY, key ID b38a8516
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@mcrac1 asm]# rpm -ivh oracleasmlib-2.0.4-1.el4.i386.rpm
warning: oracleasmlib-2.0.4-1.el4.i386.rpm: V3 DSA signature: NOKEY, key ID b38a8516
error: Failed dependencies:
        oracleasm >= 1.0.4 is needed by oracleasmlib-2.0.4-1.el4.i386
[root@mcrac1 asm]# rpm -ivh oracleasm-2.6.9-55.ELsmp-2.0.3-1.i686.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-2.6.9-55.ELsm########################################### [100%]
[root@mcrac1 asm]# rpm -ivh oracleasmlib-2.0.4-1.el4.i386.rpm
warning: oracleasmlib-2.0.4-1.el4.i386.rpm: V3 DSA signature: NOKEY, key ID b38a8516
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

[root@mcrac1 asm]# rpm -qa|grep oracleasm
oracleasm-support-2.1.3-1.el4
oracleasmlib-2.0.4-1.el4
oracleasm-2.6.9-55.ELsmp-2.0.3-1



6. Configure system files
[root@mcrac1 /]# cat /etc/security/limits.conf
# End of file
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536


Add to /etc/pam.d/login:
session required pam_limits.so


Add to /etc/profile:
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi


Add to /etc/modprobe.conf:
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Then load the module to verify:
[root@mcrac1 ~]# modprobe -v hangcheck-timer
insmod /lib/modules/2.6.9-55.ELsmp/kernel/drivers/char/hangcheck-timer.ko


Add to /etc/sysctl.conf:
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 262144
net.core.wmem_max = 262144


Apply the kernel parameters:
[root@mcrac1 ~]# sysctl -p


Configure the hosts file:
[root@mcrac1 ~]# more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost
172.16.4.81      mcrac1
172.16.4.82      mcrac1_vip
192.168.0.10     mcrac1_priv
172.16.4.83      mcrac2
172.16.4.84      mcrac2_vip
192.168.0.11     mcrac2_priv



7. Check the required system packages
[root@mcrac1 /]# rpm -qa|grep libaio
libaio-devel-0.3.105-2
libaio-0.3.105-2
[root@mcrac1 /]# rpm -qa|grep openmotif21
openmotif21-2.1.30-11.RHEL4.6


8. Configure the shared disks
[root@mcrac1 /]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [  OK  ]
Scanning the system for Oracle ASMLib disks: [  OK  ]

Creating the ASM disks fails with the following error:
[root@mcrac1 ~]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk "VOL1" as an ASM disk: [FAILED]
[root@mcrac1 ~]# tail -f /var/log/oracleasm
Writing disk header: oracleasm-write-label: Unable to clear device "/dev/sdc1": No space left on device
failed
Unable to label device "/dev/sdc1"
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Disk "VOL1" does not exist or is not instantiated
Writing disk header: oracleasm-write-label: Unable to clear device "/dev/sdc1": No space left on device
failed
Unable to label device "/dev/sdc1"

Workaround:
Repartition the disk with fdisk so that it uses a primary partition instead of an extended/logical one; a sketch follows.
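A minimal fdisk session sketch (the device and the defaults are illustrative): delete the old layout, create a single primary partition, then write and re-read the partition table.
[root@mcrac1 ~]# fdisk /dev/sdc
Command (m for help): d          (delete the existing partition(s))
Command (m for help): n          (create a new partition)
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (...): <Enter>    (accept the defaults to use the whole disk)
Last cylinder (...): <Enter>
Command (m for help): w          (write the partition table and exit)
[root@mcrac1 ~]# partprobe /dev/sdc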
Then re-create the ASM disks:
[root@mcrac1 raw]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk "VOL1" as an ASM disk: [  OK  ]
[root@mcrac1 raw]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
Marking disk "VOL2" as an ASM disk: [  OK  ]
[root@mcrac1 raw]# /etc/init.d/oracleasm createdisk VOL3 /dev/sde1
Marking disk "VOL3" as an ASM disk: [  OK  ]




Automatic startup can be enabled or disabled with the 'enable' and 'disable' options of /etc/init.d/oracleasm, as shown below.
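For example:
[root@mcrac1 ~]# /etc/init.d/oracleasm enable     (load the driver and scan disks at boot)
[root@mcrac1 ~]# /etc/init.d/oracleasm disable    (turn automatic startup off)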


Configure the raw devices:
[root@mcrac1 ~]# cat /etc/sysconfig/rawdevices
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1

[root@mcrac1 ~]# /sbin/service rawdevices restart
Assigning devices:
           /dev/raw/raw1  -->   /dev/sdc1
/dev/raw/raw1:  bound to major 8, minor 32
           /dev/raw/raw2  -->   /dev/sdd1
/dev/raw/raw2:  bound to major 8, minor 48
           /dev/raw/raw3  -->   /dev/sde1
/dev/raw/raw3:  bound to major 8, minor 64
done


[root@mcrac1 raw]# chown oracle:dba *
[root@mcrac1 raw]# ls -rtl
total 0
crw-rw----  1 oracle dba 162, 1 Mar 30 12:25 raw1
crw-rw----  1 oracle dba 162, 2 Mar 30 12:25 raw2
crw-rw----  1 oracle dba 162, 3 Mar 30 12:25 raw3
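Note that the ownership set on /dev/raw/raw* does not survive a reboot on RHEL 4. One simple workaround (a sketch, not part of the original notes) is to repeat the chown/chmod at the end of /etc/rc.local:
# appended to /etc/rc.local
chown oracle:dba /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3
chmod 660 /dev/raw/raw1 /dev/raw/raw2 /dev/raw/raw3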



9. Copy the virtual machine files from the rac1 folder to the rac2 folder and edit the rac2 VM configuration (.vmx) file:
displayname='rac2'

Add the following parameters to the configuration files of both virtual machines:
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
scsi1:0.deviceType = "disk"
scsi1:1.deviceType = "disk"
scsi1:2.deviceType = "disk"
scsi1:3.deviceType = "disk"

Start the rac2 virtual machine and change its IP addresses.
To take effect immediately:
# ifconfig eth0 172.16.4.83 netmask 255.255.255.0
# ifconfig eth1 192.168.0.11 netmask 255.255.255.0

To persist across reboots, edit:
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
(see the sketch below)
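A sketch of the two files on rac2, assuming static addressing that matches /etc/hosts (public 172.16.4.83, private 192.168.0.11):
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=172.16.4.83
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.0.11
NETMASK=255.255.255.0
ONBOOT=yes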

Change the host name.
To take effect immediately:
# hostname mcrac2

To persist across reboots, edit /etc/sysconfig/network (see the sketch below).
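A sketch of /etc/sysconfig/network on the second node, with the hostname taken from /etc/hosts:
NETWORKING=yes
HOSTNAME=mcrac2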

10. Configure SSH user equivalence
Both the root and oracle users must be able to ssh between the two nodes without a password; note that each node must also trust itself. A sketch of the setup follows.
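A minimal sketch for the oracle user (repeat for root); the key types and the final connectivity tests are the usual pre-installation steps, not commands taken from the original notes:
# on each node, as oracle
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa

# on mcrac1: gather both nodes' public keys into one authorized_keys and copy it back
cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh mcrac2 cat .ssh/id_rsa.pub .ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys mcrac2:.ssh/
chmod 600 ~/.ssh/authorized_keys
ssh mcrac2 chmod 600 .ssh/authorized_keys

# test from both nodes; no password prompt should appear
ssh mcrac1 date; ssh mcrac2 date
ssh mcrac1_priv date; ssh mcrac2_priv date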
11. Configure time synchronization
Enable the time service on mcrac1:
#chkconfig time on 

Add a crontab entry on mcrac2:
[root@mcrac2 ~]# crontab -l
*/1 * * * * rdate -s 172.16.4.81
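The time service enabled above is run from xinetd, so a quick manual check (a sketch) is:
[root@mcrac1 ~]# service xinetd reload      (only needed if xinetd has not picked up the change)
[root@mcrac2 ~]# rdate -p 172.16.4.81       (print the time reported by mcrac1)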


12. Configure OCFS2
Download the packages from:
http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL4/i386/1.2.7-1/
http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL4/i386/1.2.9-1/2.6.9-55.EL/

Install on both nodes:
[root@mcrac2 ocfs2]# rpm -ivh ocfs2-tools-1.2.7-1.el4.i386.rpm
Preparing...                ########################################### [100%]
   1:ocfs2-tools            ########################################### [100%]
[root@mcrac2 ocfs2]#  rpm -ivh ocfs2console-1.2.7-1.el4.i386.rpm
Preparing...                ########################################### [100%]
   1:ocfs2console           ########################################### [100%]
[root@mcrac2 ocfs2]# rpm -ivh ocfs2-2.6.9-55.ELsmp-1.2.9-1.el4.i686.rpm
Preparing...                ########################################### [100%]
   1:ocfs2-2.6.9-55.ELsmp   ########################################### [100%]


Verify the installed packages:
[root@mcrac1 ocfs2]# rpm -qa|grep ocfs
ocfs2-2.6.9-55.ELsmp-1.2.9-1.el4
ocfs2-tools-1.2.7-1.el4
ocfs2console-1.2.7-1.el4


To disable SELinux, run the "Security Level Configuration" GUI utility:
# /usr/bin/system-config-securitylevel &


Click the SELinux tab and uncheck the "Enabled" checkbox. After you click [OK], a warning dialog appears; click "Yes" to confirm and disable SELinux.
After making this change on both cluster nodes, reboot each node for it to take effect. SELinux must be disabled before continuing with the OCFS2 configuration.

# init 6
The node reboots.

After the reboot, launch the OCFS2 console:
# ocfs2console


Configure Nodes --> Add --> enter the node name and IP --> OK --> Apply
The following error appears:
    o2cb_ctl: Unable to access cluster service while creating node
        Could not add node node1


Workaround:
Delete the incorrect cluster.conf under /etc/ocfs2/ and reconfigure with ocfs2console (a sketch of the cleanup follows).
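A sketch of the cleanup, to be run on each node that has the stale file:
[root@mcrac1 ~]# /etc/init.d/o2cb offline ocfs2     (stop the cluster if it is running)
[root@mcrac1 ~]# rm -f /etc/ocfs2/cluster.conf
[root@mcrac1 ~]# ocfs2console &                     (re-add both nodes, then Apply)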

After that, cluster.conf on both nodes reads:
[root@mcrac1 ocfs2]# more cluster.conf
node:
        ip_port = 7777
        ip_address = 172.16.4.81
        number = 0
        name = mcrac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 172.16.4.83
        number = 1
        name = mcrac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2


Run on both nodes:
[root@mcrac1 ocfs2]# /etc/init.d/o2cb unload
Stopping O2CB cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK

Then reconfigure and start O2CB:
[root@mcrac1 ocfs2]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:           
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK


Format the OCFS2 volume on one node only:
[root@mcrac1 ocfs2]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L  crsfile /dev/sdb1
mkfs.ocfs2 1.2.7
Filesystem label=crsfile
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=2146762752 (65514 clusters) (524112 blocks)
3 cluster groups (tail covers 1002 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 1 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful


Mount it on the first node:
[root@mcrac1 ocfs2]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs2


Mounting on the second node fails:
[root@mcrac2 ocfs2]# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs2
ocfs2_hb_ctl: Bad magic number in inode while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
[root@mcrac2 ocfs2]# mounted.ocfs2 -f
Device                FS     Nodes
/dev/sdb1             ocfs2  Unknown: Bad magic number in inode 


Workaround:
Investigation showed that "Allocate all disk space now" had not been selected when the virtual disk was created, which makes OCFS2 fail when the second node joins. Shut down the virtual machines, delete that disk, create a new one with the space allocated immediately, and reformat it; after that both nodes mount the volume normally. A command-line sketch for creating a pre-allocated replacement disk follows.
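The pre-allocated disk can also be created on the host with vmware-vdiskmanager (a sketch; the size, adapter type and file path are placeholders, and -t 2 means a pre-allocated single-file disk):
vmware-vdiskmanager -c -s 2GB -a lsilogic -t 2 "D:\vm\shared\ocfs.vmdk"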

Add the volume to /etc/fstab on both nodes:
/dev/sdb1              /ocfs2 ocfs2 _netdev,datavolume,nointr 0 0
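To verify on each node, remount from the fstab entry and check which nodes see the volume (a sketch using commands shown earlier):
[root@mcrac1 ~]# umount /ocfs2        (if it is currently mounted by hand)
[root@mcrac1 ~]# mount /ocfs2         (mount using the fstab entry)
[root@mcrac1 ~]# mounted.ocfs2 -f     (should list both nodes once each has mounted)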


13. Install the CRS (Clusterware) software
Run on both nodes:
[oracle@mcrac1 logs]$ su
Password
/oracle/app/oraInventory/orainstRoot.sh

Note: on the second node, do not run this from a remote terminal session; run it in the virtual machine console:
/oracle/app/product/10.2.0/db_1/root.sh

The root.sh script on rac2 invoked VIPCA automatically, but it failed with the error: 'The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.' Because a non-routable (RFC 1918) address range is used for the public interface, the Oracle Cluster Verification Utility (CVU) could not find a suitable public interface. The workaround is to run VIPCA manually.

In the VIPCA configuration screen, fill in the fields according to /etc/hosts (a sketch of launching VIPCA manually follows).
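A sketch of launching VIPCA manually on the second node as root, with a working X display; the path assumes the CRS home used by root.sh above:
[root@mcrac2 ~]# export DISPLAY=:0.0
[root@mcrac2 ~]# /oracle/app/product/10.2.0/db_1/bin/vipca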
Run the post-installation checks:
[oracle@mcrac2 ~]$ cluvfy stage -post crsinst -n mcrac1,mcrac2

Performing post-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "mcrac2".


Checking user equivalence...
User equivalence check passed for user "oracle".

Checking Cluster manager integrity...


Checking CSS daemon...
Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...

Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.

Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...
CRS health check passed.

CRS integrity check passed.

Checking node application existence...


Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check passed.


Post-check for cluster services setup was successful.


Check the cluster resources:
[oracle@mcrac2 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.mcrac1.gsd application    ONLINE    ONLINE    mcrac1     
ora.mcrac1.ons application    ONLINE    ONLINE    mcrac1     
ora.mcrac1.vip application    ONLINE    ONLINE    mcrac1     
ora.mcrac2.gsd application    ONLINE    ONLINE    mcrac2     
ora.mcrac2.ons application    ONLINE    ONLINE    mcrac2     
ora.mcrac2.vip application    ONLINE    ONLINE    mcrac2


14. Install the database
Starting the ASM instance on the second node fails:
SQL> startup
ASM instance started

Total System Global Area   92274688 bytes
Fixed Size                  1217884 bytes
Variable Size              65890980 bytes
ASM Cache                  25165824 bytes
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup
"DATADG"

The alert log shows:
Wed Mar 31 10:30:39 2010
SQL> ALTER DISKGROUP ALL MOUNT
Wed Mar 31 10:30:39 2010
NOTE: cache registered group DATADG number=1 incarn=0xc102e171
Wed Mar 31 10:30:39 2010
Loaded ASM Library - Generic Linux, version 2.0.4 (KABI_V2) library for asmlib interface
Wed Mar 31 10:30:39 2010
ORA-15186: ASMLIB error function = [asm_open],  error = [1],  mesg = [Operation not permitted]
Wed Mar 31 10:30:39 2010
ORA-15186: ASMLIB error function = [asm_open],  error = [1],  mesg = [Operation not permitted]
Wed Mar 31 10:30:39 2010
ORA-15186: ASMLIB error function = [asm_open],  error = [1],  mesg = [Operation not permitted]
Wed Mar 31 10:30:39 2010
ERROR: no PST quorum in group 1: required 1, found 0
Wed Mar 31 10:30:39 2010
NOTE: cache dismounting group 1/0xC102E171 (DATADG)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DATADG was not mounted

Workaround:
Add the following to the init+ASM2 parameter file on the second node:
asm_diskstring=ORCL:*

After adding it, the alert log shows:
Wed Mar 31 11:01:23 2010
SQL> ALTER DISKGROUP ALL MOUNT
Wed Mar 31 11:01:23 2010
NOTE: cache registered group DATADG number=1 incarn=0xc105b157
Wed Mar 31 11:01:23 2010
Loaded ASM Library - Generic Linux, version 2.0.4 (KABI_V2) library for asmlib interface
Wed Mar 31 11:01:23 2010
ERROR: no PST quorum in group 1: required 2, found 0
Wed Mar 31 11:01:23 2010
NOTE: cache dismounting group 1/0xC105B157 (DATADG)
NOTE: dbwr not being msg'd to dismount
ERROR: diskgroup DATADG was not mounted

Workaround:
Reboot the node. Strange, but it worked.
After the installation completes, the RAC status shows:
[oracle@mcrac2 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.dbrac.db   application    ONLINE    ONLINE    mcrac1     
ora....c1.inst application    ONLINE    ONLINE    mcrac1     
ora....c2.inst application    ONLINE    ONLINE    mcrac2     
ora....SM1.asm application    ONLINE    ONLINE    mcrac1     
ora....C1.lsnr application    ONLINE    ONLINE    mcrac1     
ora.mcrac1.gsd application    ONLINE    ONLINE    mcrac1     
ora.mcrac1.ons application    ONLINE    ONLINE    mcrac1     
ora.mcrac1.vip application    ONLINE    ONLINE    mcrac1     
ora....SM2.asm application    ONLINE    ONLINE    mcrac2     
ora....C2.lsnr application    ONLINE    ONLINE    mcrac2     
ora.mcrac2.gsd application    ONLINE    ONLINE    mcrac2     
ora.mcrac2.ons application    ONLINE    ONLINE    mcrac2     
ora.mcrac2.vip application    ONLINE    ONLINE    mcrac2    





