Environment:
Install VirtualBox on 64-bit RHEL and create the hosts rac1.ad.com and rac2.ad.com; 64-bit versions are used throughout.
Oracle: 11.2.0.3 64-bit
0: Set up time synchronization
Server side:
[root@ test]# vim /etc/xinetd.d/time-dgram
disable = no
[root@ test]# vim /etc/xinetd.d/time-stream
disable = no
/etc/init.d/xinetd restart
--after the restart, TCP and UDP port 37 should both be open; a quick check is shown below
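To confirm the time service is listening after the restart (a quick sanity check; output formats vary by system):
# netstat -ant | grep :37    --TCP port 37
# netstat -anu | grep :37    --UDP port 37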
Client side:
# vim sync_time_ss.sh --write the sync script
#!/bin/bash
while :; do rdate -s 10.13.12.21; sleep 10; done #sync the time every 10 seconds
# sh sync_time_ss.sh & --run it in the background
[root@rac2 ~]# crontab -e
* * * * * rdate -s 10.13.12.21 --sync the time every minute
I: Set up a graphical (VNC) connection
1: [root@master ]# vim /etc/sysconfig/vncservers   and add the following two lines:
VNCSERVERS="89:oracle"
--configures display 89 for the oracle user; for multiple users use: VNCSERVERS="89:oracle 90:root"
VNCSERVERARGS[2]="-geometry 800x600 -nolisten tcp -nohttpd -localhost"
--without these changes, /etc/init.d/vncserver restart reports: no displays configured (a complete sample file is shown below)
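For reference, a minimal complete /etc/sysconfig/vncservers might look like this (a sketch; adjust users and displays to your setup):
VNCSERVERS="89:oracle 90:root"
VNCSERVERARGS[89]="-geometry 1024x768 -nolisten tcp -nohttpd"
VNCSERVERARGS[90]="-geometry 1024x768 -nolisten tcp -nohttpd"
The index in VNCSERVERARGS[] should match the display number, and adding -localhost would limit access to SSH tunnels only, so it is omitted here because step 6 below connects to ip:89 directly.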
2:[oracle@master ~]$ vncserver :89
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
You will require a password to access your desktops.
Password: --set the VNC access password
Verify:
Passwords don't match - try again
Password:
Verify:
xauth: creating new authority file /home/oracle/.Xauthority
New 'master.wonder.com:89 (oracle)' desktop is master.wonder.com:89
Creating default startup script /home/oracle/.vnc/xstartup
Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/master.wonder.com:89.log
3: Configure the oracle desktop for display 89
[oracle@master ]$ vim /home/oracle/.vnc/xstartup
#!/bin/sh
# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER --remove the leading # to uncomment
exec /etc/X11/xinit/xinitrc --remove the leading # to uncomment
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
gnome-session & --delete the 'twm &' line and add this one
4: [oracle@master ]$ vncpasswd --set the access password; the same password as above is fine
5: [root@master ]# /etc/init.d/vncserver restart --restart the service for the changes to take effect
6: Download a VNC client and connect to ip:89 (or ip:90); a public IP works as well
7: If the screen is just grey after connecting, check the /home/oracle/.vnc/master.wonder.com:89.log log; if it says the gnome-session command cannot be found, run yum install gnome-session. After connecting, the Bitstream Vera Sans Mono font, Roman style, looks great in the terminal!
Note: pay attention to which user each command above is run as, since permissions matter.
II: Download VirtualBox-4.2-4.2.12_84980_el5-1.x86_64.rpm (the build for Oracle Linux 5) from the official site;
rpm -ivh VirtualBox-4.2-4.2.12_84980_el5-1.x86_64.rpm installs it.
# virtualbox --launches the GUI
III: Download and install RHEL 5.8. Remember to change the path where the VMs are stored, as well as the default path used when cloning, under File -- Preferences.
Redhat_Linux_v5.8.X86_64.iso download link:
Link: http://pan.baidu.com/share/link?shareid=1330252349&uk=2820576599  password: yigh
IV: Configure the yum repositories
# umount /dev/hdc --the automatically mounted path contains spaces, which breaks the yum configuration, so remount it below
# mkdir /mnt/cdrom
# mount /dev/hdc /mnt/cdrom
# vim /etc/yum.repos.d/local_cdrom.repo
[Cluster]
name=Red Hat Enterprise Cluster
baseurl=file:///mnt/cdrom/Cluster
enabled=1
gpgcheck=0
[ClusterStorage]
name=Red Hat Enterprise ClusterStorage
baseurl=file:///mnt/cdrom/ClusterStorage
enabled=1
gpgcheck=0
[Server]
name=Red Hat Enterprise Server
baseurl=file:///mnt/cdrom/Server
enabled=1
gpgcheck=0
[VT]
name=Red Hat Enterprise Linux VT
baseurl=file:///mnt/cdrom/VT
enabled=1
gpgcheck=0
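A quick check that yum sees the local repositories (the repo file must live under /etc/yum.repos.d/ for yum to pick it up):
# yum clean all
# yum repolist    --should list the Server, Cluster, ClusterStorage and VT repos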
V: Resolve package dependencies:
# yum install libXp* libaio* gcc-* gcc-c++-* make-* setarch-* unixODBC* compat-libstdc* sysstat -y
VI: Create users and groups
# groupadd -g 5000 asmadmin
# groupadd -g 5001 asmdba
# groupadd -g 5002 asmoper
# groupadd -g 6000 oinstall
# groupadd -g 6001 dba
# groupadd -g 6002 oper
# useradd -u 1000 -g oinstall -G asmadmin,asmdba,asmoper grid --use usermod with the same options if the user already exists
# useradd -u 1001 -g oinstall -G dba,asmdba oracle
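A quick verification on both nodes (the installer's SSH setup later also needs passwords for both users):
# id grid    --uid=1000(grid) gid=6000(oinstall) groups=6000(oinstall),5000(asmadmin),5001(asmdba),5002(asmoper)
# id oracle  --uid=1001(oracle) gid=6000(oinstall) groups=6000(oinstall),6001(dba),5001(asmdba)
# passwd grid
# passwd oracle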
VII: Create directories and set the correct ownership and permissions
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/11.2.0.3/grid
# chown -R grid:oinstall /u01
# mkdir /u01/app/oracle/
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01
VIII: Set environment variables
grid user environment variables:
grid $ vim .bash_profile
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0.3/grid
export ORACLE_SID=+ASM1 --change to +ASM2 on node 2
export LANG=en
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
Run $ source .bash_profile to make the settings take effect.
oracle user environment variables:
oracle $ vim .bash_profile
export ORACLE_BASE=/u01/app/oracle/
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome
export ORACLE_SID=chris1 --must match the global database name/SID entered later in dbca; change to chris2 on node 2
export LANG=en
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$PATH
Run $ source .bash_profile to make the settings take effect.
IX: Edit the following files and set the related values.
1. Edit /etc/security/limits.conf and append the following lines:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
X: Edit /etc/pam.d/login and append the following line:
session required /lib64/security/pam_limits.so
Edit /etc/profile and append the following lines (a quick check follows the block):
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
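To confirm the limits apply to new login sessions (values should match limits.conf and the profile snippet above):
# su - grid -c 'ulimit -n -u'     --expect open files 65536, max user processes 16384
# su - oracle -c 'ulimit -n -u'   --expect open files 65536, max user processes 16384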
XI: Edit /etc/sysctl.conf and add the following:
kernel.shmmax = 4294967295
kernel.shmall = 268435456
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 8192
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 0
kernel.core_uses_pid = 1
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
fs.aio-max-nr=1048576
Run # sysctl -p to make the settings take effect; a quick check is shown below.
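A quick way to confirm the kernel parameters are active:
# sysctl -a | grep -E 'shmmax|shmall|shmmni|file-max|ip_local_port_range|aio-max-nr'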
XII: Add a NIC in internal network mode; after booting, configure a static IP (192.168.179.150 here)
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.179.150
NETMASK=255.255.255.0
NETWORK=192.168.179.0
ONBOOT=yes
HWADDR=00:0c:29:2c:28:35
# /etc/init.d/network restart
XIII: Configure the hostname and /etc/hosts (identical on both nodes); a reachability check follows the list:
10.13.12.150 rac1 rac1.ad.com
10.13.12.151 rac2 rac2.ad.com
192.168.1.150 rac1-priv
192.168.1.151 rac2-priv
10.13.12.155 rac1-vip
10.13.12.156 rac2-vip
10.13.12.152 ad-cluster ad-cluster-scan
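Once /etc/hosts is the same on both nodes, a quick reachability check (the VIP and SCAN addresses only respond after Grid Infrastructure is up):
# for h in rac1 rac2 rac1-priv rac2-priv; do ping -c 1 $h; done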
XIV: Create the cluster VMs and shared disks
1: Clone a test_rac1 and a test_rac2, and create a share_storage directory dedicated to the shared disk files. Note: choose fixed size when creating the shared disks.
In test_rac1 create the shared disks: share_disk1, share_disk2 and share_disk_voting_4 hold the CRS information,
and share_disk_data3 holds the data.
2: Make the disks shareable: File -- Virtual Media Manager, modify the 4 files above to shareable.
3: In test_rac2 add the disks: select "choose existing disk", then pick those 4 shared disks. (The same can be done from the host command line; see the VBoxManage sketch below.)
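A sketch of the same steps with VirtualBox 4.2's VBoxManage (the file path, size and the controller name "SATA" are assumptions; match them to your own VM definitions and repeat for each shared disk):
VBoxManage createhd --filename /data1/share_storage/share_disk1.vdi --size 1024 --variant Fixed
VBoxManage modifyhd /data1/share_storage/share_disk1.vdi --type shareable
VBoxManage storageattach test_rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /data1/share_storage/share_disk1.vdi
VBoxManage storageattach test_rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium /data1/share_storage/share_disk1.vdi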
XV: Configure the system
1: Switch both nodes to static IPs. The same disk shows up with a different device name on each node, so bind the disks with udev:
[root@rac1 ~]# fdisk -l
Disk /dev/sda: 25.7 GB, 25769803776 bytes
255 heads, 63 sectors/track, 3133 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 3133 25061400 8e Linux LVM
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
[root@rac2 ~]# fdisk -l
Disk /dev/sda: 25.7 GB, 25769803776 bytes
255 heads, 63 sectors/track, 3133 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 3133 25061400 8e Linux LVM
Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
With the 4 shared disks in place, bind the device names with udev. The loop below prints one rule line per disk; put its output into the rules file:
# touch /etc/udev/rules.d/99-oracle-asmdevices.rules
# vim /etc/udev/rules.d/99-oracle-asmdevices.rules
for i in b c d e;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done
[root@rac1 rules.d]# cat !$
cat 99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB77c6d2b5-16007211_", NAME="asm-b_crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VB7d0c7f3b-d289b35f_", NAME="asm-c_crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBd1a2c502-80189553_", NAME="asm-d_crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="SATA_VBOX_HARDDISK_VBb9935999-ff6b34be_", NAME="asm-e_data", OWNER="grid", GROUP="asmadmin", MODE="0660"
Copy the rules file to the other node, then run start_udev on both nodes, for example:
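(hostnames as defined in /etc/hosts above)
[root@rac1 rules.d]# scp 99-oracle-asmdevices.rules rac2:/etc/udev/rules.d/
[root@rac1 rules.d]# start_udev
[root@rac2 ~]# start_udev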
[root@rac1 rules.d]# ls /dev/sd* -l
brw-r----- 1 root disk 8, 0 Jun 17 2013 /dev/sda
brw-r----- 1 root disk 8, 1 Jun 17 10:25 /dev/sda1
brw-r----- 1 root disk 8, 2 Jun 17 2013 /dev/sda2
brw-r----- 1 root disk 8, 16 Jun 17 2013 /dev/sdb
brw-r----- 1 root disk 8, 32 Jun 17 2013 /dev/sdc
brw-r----- 1 root disk 8, 48 Jun 17 2013 /dev/sdd
brw-r----- 1 root disk 8, 64 Jun 17 2013 /dev/sde
[root@rac1 rules.d]# ls /dev/asm* -l
brw-rw---- 1 grid root 8, 16 Jun 17 11:06 /dev/asm-b_crs
brw-rw---- 1 grid root 8, 32 Jun 17 11:06 /dev/asm-c_crs
brw-rw---- 1 grid root 8, 64 Jun 17 11:06 /dev/asm-d_crs
brw-rw---- 1 grid root 8, 48 Jun 17 11:06 /dev/asm-e_data
[root@rac2 rules.d]# ls /dev/asm* -l
brw-rw---- 1 grid asmadmin 8, 48 Jun 17 11:16 /dev/asm-b_crs
brw-rw---- 1 grid asmadmin 8, 64 Jun 17 11:16 /dev/asm-c_crs
brw-rw---- 1 grid asmadmin 8, 32 Jun 17 11:16 /dev/asm-d_crs
brw-rw---- 1 grid asmadmin 8, 16 Jun 17 11:16 /dev/asm-e_data
[root@rac2 rules.d]# ls /dev/sd* -l
brw-r----- 1 root disk 8, 0 Jun 17 2013 /dev/sda
brw-r----- 1 root disk 8, 1 Jun 17 10:33 /dev/sda1
brw-r----- 1 root disk 8, 2 Jun 17 2013 /dev/sda2
brw-r----- 1 root disk 8, 16 Jun 17 2013 /dev/sdb
brw-r----- 1 root disk 8, 32 Jun 17 2013 /dev/sdc
brw-r----- 1 root disk 8, 48 Jun 17 2013 /dev/sdd
brw-r----- 1 root disk 8, 64 Jun 17 2013 /dev/sde
XVI: Sharing files from the host into the VMs is awkward because the vboxsf module is missing.
Mounting fails:
[root@rac1 mnt]# mount -t vboxsf /data1/rac_file/grid grid
mount: unknown filesystem type 'vboxsf'
So fall back to scp to copy the grid and oracle installation files into the VMs:
scp -r [email protected]@/tmp/xxx ./
XVII: Install grid
[root@rac1 grid_inst_file]# chown grid:oinstall ../grid_inst_file -R
[root@rac1 grid_inst_file]# chown oracle:oinstall ../oracle_inst_file -R
The following must be done on both nodes.
=============================================
--fix the failed ntp prerequisite check:
Disable ntpd time synchronization (Oracle will use its internal time synchronization service, CTSS):
# /sbin/service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.original
Also delete the following file:
[root@racnode1 ~]# rm /var/run/ntpd.pid
This file holds the pid of the NTP daemon.
=======================================================
--fix the cvuqdisk-1.0.9-1.rpm dependency
[root@rac1 ~]# cd /mnt/grid_inst_file/grid/rpm/
[root@rac1 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]
[root@rac1 rpm]# scp cvuqdisk-1.0.9-1.rpm 192.168.1.151:/tmp
The authenticity of host '192.168.1.151 (192.168.1.151)' can't be established.
RSA key fingerprint is f2:8f:81:1e:f3:5d:df:e6:1a:b5:ed:58:1f:af:c5:5e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.1.151' (RSA) to the list of known hosts.
[email protected]'s password:
cvuqdisk-1.0.9-1.rpm
[root@rac2 ~]# rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk ########################################### [100%]
================================================================================
--fix the elfutils-libelf-devel-0.125 dependency
[root@rac1 rpm]# yum install elfutils-libelf-devel -y
===========================================================================
--prerequisite failures that can be ignored
device checks for asm --asmlib is not installed and udev binding is used instead, so this can be ignored
task resolv.conf integrity --caused by the configured DNS IP being unreachable; no impact on the installation
==========================================
[grid@rac1 grid]$ ./runcluvfy.sh stage -post hwos -n rac1,rac2
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.13.12.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "10.13.12.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rac2,rac1
TCP connectivity check passed for subnet "192.168.1.0"
Interfaces found on subnet "10.13.12.0" that are likely candidates for a private interconnect are:
rac2 eth0:10.13.12.151
rac1 eth0:10.13.12.150
Interfaces found on subnet "192.168.1.0" that are likely candidates for a private interconnect are:
rac2 eth1:192.168.1.151
rac1 eth1:192.168.1.150
WARNING:
Could not find a suitable set of interfaces for VIPs
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.13.12.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "10.13.12.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.13.12.0" for multicast communication with multicast group "230.0.1.0" passed.
Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.
Check of multicast communication passed.
Check for multiple users with UID value 0 passed
Time zone consistency check passed
Checking shared storage accessibility...
No shared storage found
Shared storage check failed on nodes "rac2,rac1"
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@rac1 grid]$
Installation:
[grid@rac1 grid_inst_file ~]$ export LANG=en
[grid@rac1 grid_inst_file ~]$ ./runInstaller
step 1 of 8 :
install and configure grid infrastructure for a cluster
step 2 of 8 :
advanced installation
step 3 of 8 :
simplified chinese and english
step 4 of 8 :
cluster name : ad-cluster --fill these in according to the SCAN name defined in /etc/hosts
scan name: ad-cluster-scan
scan port : 1521
do not check 'configure GNS', i.e. use the name resolution defined in DNS/hosts
step 5 of 16 :
change the first node's detected entries to rac1 and rac1-vip
add -- hostname: rac2 (entered according to the public IP in /etc/hosts)  virtual ip name : rac2-vip --add the second node
ssh connectivity -- os username : grid  os password : grid --setup --test
step 6 of 16 :
eth0 120.197.95.0 public
eth1 192.168.8.0 private
the values detected at this step are basically correct; no changes needed
step 7 of 15:
automatic storage management(ASM)
step 8 of 15:
change discovery path -- disk discovery path : /dev/asm* --ok
disk group name : CRS_DATA
redundancy : external --choosing normal requires at least 3 disks here, high at least 5
all disks
select /dev/asm-b_crs, /dev/asm-c_crs and /dev/asm-d_crs to hold the OCR
step 9 of 15:
use same passwords for these accounts --enter the password twice and avoid special characters; 'oracleasm' was entered here. This sets the password for the ASM sys and asmsnmp users
step 10 of 16:
do not use intelligent platform management interface(IPMI)
step 11 of 16:
OSDBA GROUP : asmdba
OSOPER GROUP: asmoper
OSASM GROUP: asmadmin
step 12 of 16:
oracle base : /u01/app/grid
software location : /u01/app/11.2.0.3/grid --note that the paths here use 11.2.0.3; double-check them
step 13 of 17:
Inventory directory : /data/u01/app/oraInventory
step 14 of 17:
check 'ignore all'
step 15 of 17:
finished
The following warning appeared partway through:
WARNING: Error while copying directory /u01/app/11.2.0.3/grid with exclude file list '/tmp/OraInstall2013-06-22_03-26-31PM/installExcludeFile.lst' to nodes 'rac2'.
[Connection closed by 10.13.12.151 :failed]
Refer to '/u01/app/oraInventory/logs/installActions2013-06-22_03-26-31PM.log' for details. You may fix the errors on the required remote nodes.
Refer to the install guide for error recovery. Click 'Yes' if you want to proceed. Click 'No' to exit the install. Do you want to continue?
[grid@rac1 grid]$ cat /tmp/OraInstall2013-06-22_03-26-31PM/installExcludeFile.lst
/u01/app/11.2.0.3/grid/cfgtoollogs/cfgfw
[grid@rac1 grid]$ ll /u01/app/11.2.0.3/grid/cfgtoollogs/cfgfw
total 4
-rw------- 1 grid oinstall 1261 Jun 22 15:34 CfmLogger_2013-06-22_03-34-58-PM.log
-rw-r--r-- 1 grid oinstall 0 Jun 22 15:34 CfmLogger_2013-06-22_03-34-58-PM.log.lck
-rw------- 1 grid oinstall 0 Jun 22 15:34 OuiConfigVariables_2013-06-22_03-34-58-PM.log
-rw-r--r-- 1 grid oinstall 0 Jun 22 15:34 OuiConfigVariables_2013-06-22_03-34-58-PM.log.lck
-rw------- 1 grid oinstall 0 Jun 22 15:34 oracle.assistants.asm_2013-06-22_03-34-58-PM.log
-rw-r--r-- 1 grid oinstall 0 Jun 22 15:34 oracle.assistants.asm_2013-06-22_03-34-58-PM.log.lck
-rw------- 1 grid oinstall 0 Jun 22 15:34 oracle.assistants.netca.client_2013-06-22_03-34-58-PM.log
-rw-r--r-- 1 grid oinstall 0 Jun 22 15:34 oracle.assistants.netca.client_2013-06-22_03-34-58-PM.log.lck
-rw------- 1 grid oinstall 0 Jun 22 15:34 oracle.crs_2013-06-22_03-34-58-PM.log
-rw-r--r-- 1 grid oinstall 0 Jun 22 15:34 oracle.crs_2013-06-22_03-34-58-PM.log.lck
Click OK to continue; next it prompts to run the root scripts
[root@rac1 rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 app]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 app]# /u01/app/11.2.0.3/grid/root.sh --by mistake this was run on node 2 first; since it was already done, see below for what went wrong. In the VM this script took roughly ten-plus minutes
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
ASM created and started successfully.
Disk Group CRSDATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 94402cc33bc24f83bf35d6623a3990c9.
Successful addition of voting disk 2ab935b263ee4f9fbf2eb0113d401181.
Successful addition of voting disk 6c1570ab94364f33bf0e30e8c251deab.
Successfully replaced voting disk group with +CRSDATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 94402cc33bc24f83bf35d6623a3990c9 (/dev/asm-b_crs) [CRSDATA]
2. ONLINE 2ab935b263ee4f9fbf2eb0113d401181 (/dev/asm-c_crs) [CRSDATA]
3. ONLINE 6c1570ab94364f33bf0e30e8c251deab (/dev/asm-d_crs) [CRSDATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.CRSDATA.dg' on 'rac2'
CRS-2676: Start of 'ora.CRSDATA.dg' on 'rac2' succeeded
Failed to start OC4J
PRCR-1079 : Failed to start resource ora.oc4j
CRS-2674: Start of 'ora.oc4j' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.oc4j' on that would satisfy its placement policy
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac2'
CRS-2676: Start of 'ora.registry.acfs' on 'rac2' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Run on the first node:
[root@rac1 rpm]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
Mounting Disk Group CRSDATA failed with the following message:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "CRSDATA" cannot be mounted
ORA-15003: diskgroup "CRSDATA" already mounted in another lock name space
Configuration of ASM ... failed
see asmca logs at /u01/app/grid/cfgtoollogs/asmca for details
Did not succssfully configure and start ASM at /u01/app/11.2.0.3/grid/crs/install/crsconfig_lib.pm line 6763.
/u01/app/11.2.0.3/grid/perl/bin/perl -I/u01/app/11.2.0.3/grid/perl/lib -I/u01/app/11.2.0.3/grid/crs/install /u01/app/11.2.0.3/grid/crs/install/rootcrs.pl execution failed
It errored out, and the cluster will not start:
[root@rac1 rpm]# su - grid
[grid@rac1 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
At this point the RAC installation has failed, so the only option is to deinstall GI and install again. Run the deinstall on node 2 and then on node 1.
[grid@rac2 deinstall]$ cd /u01/app/11.2.0.3/grid/deinstall
[grid@rac2 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Location of logs /tmp/deinstall2013-06-22_05-02-43PM/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/11.2.0.3/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0.3/grid
The following nodes are part of this cluster: rac1,rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2013-06-22_05-02-43PM/logs//crsdc.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2013-06-22_05-02-43PM/logs/netdc_check2013-06-22_05-09-06-PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2013-06-22_05-02-43PM/logs/asmcadc_check2013-06-22_05-09-07-PM.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/11.2.0.3/grid.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +CRSDATA
ASM diskstring : /dev/asm*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'.
Do you want to modify above information (y|n) [n]:
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0.3/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /u01/app/11.2.0.3/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y ---the key step is entering y here so the ASM configuration is removed; press Enter through everything else, and run the scripts as root when prompted below
A log of this session will be written to: '/tmp/deinstall2013-06-22_05-02-43PM/logs/deinstall_deconfig2013-06-22_05-07-30-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-06-22_05-02-43PM/logs/deinstall_deconfig2013-06-22_05-07-30-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2013-06-22_05-02-43PM/logs/asmcadc_clean2013-06-22_05-09-42-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2013-06-22_05-02-43PM/logs/netdc_clean2013-06-22_05-11-23-PM.log
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "rac1".
/tmp/deinstall2013-06-22_05-02-43PM/perl/bin/perl -I/tmp/deinstall2013-06-22_05-02-43PM/perl/lib -I/tmp/deinstall2013-06-22_05-02-43PM/crs/install /tmp/deinstall2013-06-22_05-02-43PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-06-22_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "rac2".
/tmp/deinstall2013-06-22_05-02-43PM/perl/bin/perl -I/tmp/deinstall2013-06-22_05-02-43PM/perl/lib -I/tmp/deinstall2013-06-22_05-02-43PM/crs/install /tmp/deinstall2013-06-22_05-02-43PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-06-22_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands
<----------------------------------------
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2013-06-22_05-02-43PM' on node 'rac2'
Clean install operation removing temporary directory '/tmp/deinstall2013-06-22_05-02-43PM' on node 'rac1'
XML file /tmp/deinstall2013-06-22_05-02-43PM/deinstall.xml does not exist in the path specified. Verify the location and restart the application.
## [END] Oracle install clean ##
######################### CLEAN OPERATION END #########################
####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Oracle Clusterware is stopped and successfully de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
Oracle Universal Installer cleanup completed with errors.
Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'rac2,rac1' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END #############
As prompted above, run rm -rf /opt/ORCLfmap as root:
[root@rac2 ~]# rm -rf /opt/ORCLfmap
Run the deconfig command as root on node 2:
[root@rac2 ~]# /tmp/deinstall2013-06-22_05-02-43PM/perl/bin/perl -I/tmp/deinstall2013-06-22_05-02-43PM/perl/lib -I/tmp/deinstall2013-06-22_05-02-43PM/crs/install /tmp/deinstall2013-06-22_05-02-43PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2013-06-22_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2013-06-22_05-02-43PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Network exists: 1/10.13.12.0/255.255.255.0/eth0, type static
VIP exists: /rac2-vip/10.13.12.156/10.13.12.0/255.255.255.0/eth0, hosting node rac2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac2'
CRS-2677: Stop of 'ora.registry.acfs' on 'rac2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac2' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac2'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac2'
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac2'
CRS-2677: Stop of 'ora.mdnsd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac2' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac2' succeeded
CRS-2677: Stop of 'ora.cssd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac2'
CRS-2677: Stop of 'ora.crf' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac2'
CRS-2677: Stop of 'ora.gipcd' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac2'
CRS-2677: Stop of 'ora.gpnpd' on 'rac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 ~]#
Do the same on node 1.
OK, with everything deinstalled, install again.
[root@rac1 ~]# rm /u01 -rf
[root@rac2 ~]# rm /u01 -rf
Run on both nodes:
mkdir -p /u01/app/grid
mkdir -p /u01/app/11.2.0.3/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
Run on both nodes:
[root@rac1 ~]# rpm -ivh /mnt/grid_inst_file/grid/rpm/cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
[root@rac2 ~]# rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
Run the installer on node 1 as the grid user:
[grid@rac1 grid]$ ./runInstaller
An error appeared partway through:
SEVERE: Remote 'AttachHome' failed on nodes: 'rac2'. Refer to '/u01/app/oraInventory/logs/installActions2013-06-22_05-58-40PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
/u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=
Please refer 'AttachHome' logs under central inventory of remote nodes where failure occurred for more details.
So run the suggested command on the already-installed rac2, but it still fails:
[grid@rac2 deinstall]$ /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=rac2
Error in GetCurrentDir(): 2
Error in GetCurrentDir(): 2
Error in GetCurrentDir(): 2
Starting Oracle Universal Installer...
sh: /command_output_5864: Permission denied
Checking swap space: 0 MB available, 500 MB required. Failed <<<<
Some requirement checks failed. You must fulfill these requirements before
continuing with the installation,
Exiting Oracle Universal Installer, log for this session can be found at /u01/app/oraInventory/logs/AttachHome2013-06-22_09-05-53PM.log
[grid@rac2 deinstall]$ vim /u01/app/oraInventory/logs/AttachHome2013-06-22_09-05-53PM.log
[grid@rac2 deinstall]$ free
total used free shared buffers cached
Mem: 3952596 3900544 52052 0 177456 3467532
-/+ buffers/cache: 255556 3697040
Swap: 5996536 0 5996536
But free shows more than 5G of swap completely unused, which is odd.
With no other option, log in to the VM's graphical desktop as grid and run the command again; this time, surprisingly, it worked:
[grid@rac2 ~]$ /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=rac2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5855 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.
Click OK in the installer GUI on RAC1 to continue; the prompt to run the root scripts appears right away
[root@rac1 ~]# hostname
rac1.ad.com
[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]# /u01/app/11.2.0.3/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/11.2.0.3/grid ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CLUSTER_NODES=rac1,rac2 "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=rac2
The user is root. Oracle Universal Installer cannot continue installation if the user is root.
: No such file or directory
[root@rac2 ~]# hostname
rac2.ad.com
[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 ~]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group CRSDATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 2b1bd0c122584f5abf72033b2b2d26bd.
Successful addition of voting disk 2bc03776cdd94f5cbfb9165c473fdb0e.
Successful addition of voting disk 3b43c39513a64f2dbf7083a9510ada89.
Successfully replaced voting disk group with +CRSDATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 2b1bd0c122584f5abf72033b2b2d26bd (/dev/asm-b_crs) [CRSDATA]
2. ONLINE 2bc03776cdd94f5cbfb9165c473fdb0e (/dev/asm-c_crs) [CRSDATA]
3. ONLINE 3b43c39513a64f2dbf7083a9510ada89 (/dev/asm-d_crs) [CRSDATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRSDATA.dg' on 'rac1'
CRS-2676: Start of 'ora.CRSDATA.dg' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'rac1'
CRS-2676: Start of 'ora.registry.acfs' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#
[root@rac2 ~]# /u01/app/11.2.0.3/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0.3/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Next click Next in the installer, and the installation completes.
[grid@rac2 /]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
[grid@rac2 /]$
At this point the clusterware installation is complete.
=========================================================================================
Create the data disk groups with ASMCA
[grid@rac1 /]$ asmca
disk groups --create
disk group name : fra
redundancy : normal
three 300G disks; for fra4, i.e. /dev/sdm4, quorum is checked
disk groups --create
disk group name : data
redundancy : external
Final disk capacity information:
sysdg uses three 1G disks with normal redundancy; total 2.87G, 1.97G free, 0.5G usable
fra uses three 300G disks with normal redundancy, one of them marked quorum; total 558.81G, 558.62G free, 93.04G usable
data uses a single 1300G partition with external redundancy; total 1300.81G, 1300.71G free, 1300.71G usable
See asm_disk_info.jpg in the same directory.
====================================================================
Install the Oracle database software, then create the database with dbca
Log in as the oracle user to run the installer:
oracle base : /u01/app/oracle/
software location : /u01/app/oracle/product/11.2.0.3/dbhome
groups: dba, oinstall
Partway through, when running the root.sh script, node 1 turned out to have been evicted from the cluster, and both restarting and stopping the cluster reported errors. Rebooting the OS was not an option because the Oracle software install was one step from completion, so the fault had to be fixed first.
[root@rac1 bin]# ./crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
From the above, the cluster is clearly down.
[root@rac1 bin]# ./crs_start -all
CRS-0184: Cannot communicate with the CRS daemon.
[root@rac1 bin]# ./crs_start -all
CRS-0184: Cannot communicate with the CRS daemon.
[root@rac1 bin]# ./crsctl start crs
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
[root@rac1 bin]# ./crsctl stop crs
CRS-2796: The command may not proceed when Cluster Ready Services is not running
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.
[root@rac1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@rac1 bin]# ./crs_stop -all
CRS-0184: Cannot communicate with the CRS daemon.
As shown, the commands above can neither stop nor start the cluster.
...........
Judging by the errors above, the cause should be that the ASM instance was shut down abnormally. A quick df -h also showed the root filesystem at 100% usage, which is probably what made ASM fail.
Log in as grid and start the ASM instance:
Connected to an idle instance.
ASMCMD> ls
ASMCMD-8102: no connection to ASM; command requires ASM to run
ASMCMD> startup
ASM instance started
Total System Global Area 283930624 bytes
Fixed Size 2227664 bytes
Variable Size 256537136 bytes
ASM Cache 25165824 bytes
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15003: diskgroup "DATA" already mounted in another lock name space
ORA-15017: diskgroup "CRSDATA" cannot be mounted
ORA-15003: diskgroup "CRSDATA" already mounted in another lock name space
ASMCMD> ls
CRSDATA/
DATA/
ASMCMD> cd CRSDATA
ASMCMD-8001: diskgroup 'CRSDATA' does not exist or is not mounted
ASMCMD> cd DATA
ASMCMD-8001: diskgroup 'DATA' does not exist or is not mounted
The above shows things are still not right: the CRSDATA and DATA disk groups cannot be accessed from RAC1, even though RAC2 looks normal.
The log also reports errors:
[root@rac1 grid]# cat /u01/app/11.2.0.3/grid/log/rac1/alertrac1.log
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
2013-06-29 16:13:35.185
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
So log in to node 2 and check the disk groups under ASMCMD: they cannot be queried there either. The problem should be right here. The guess is:
node 1 shut down its ASM1 instance abnormally, leaving the ASM2 instance unable to access the disk groups; ./crsctl stat res -t on RAC2 still looks normal, but that is probably misleading.
So there was no choice but to shut down the cluster on RAC2 (had it been possible to reboot the RAC1 host, that would have resolved this automatically without touching RAC2; presumably killing all the cluster processes with kill -9 and then running crs_start -all would also have worked).
[root@rac2 ~]# cd /u01/app/11.2.0.3/grid/bin/
[root@rac2 bin]$ ./crsctl stop crs
................
It shut down normally; start it again:
[root@rac2 bin]$ ./crsctl start crs
The disk groups are accessible again.
But RAC1 still has the same problem, so shut down and restart ASM there:
ASMCMD> shutdown immediate
ORA-15100: invalid or missing diskgroup name
ORA-15100: invalid or missing diskgroup name
ASM instance shutdown
Connected to an idle instance.
ASMCMD> startup
ASM instance started
Total System Global Area 283930624 bytes
Fixed Size 2227664 bytes
Variable Size 256537136 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled
ASMCMD> ls
CRSDATA/
DATA/
ASMCMD> ls CRSDATA
ad-cluster/
Now the disk groups can be accessed normally and the cluster should have recovered; check the log:
[root@rac1 grid]# tail -f /u01/app/11.2.0.3/grid/log/rac1/alertrac1.log
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
2013-06-29 16:13:35.185
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
2013-06-29 16:14:12.158
[/u01/app/11.2.0.3/grid/bin/oraagent.bin(8988)]CRS-5019:All OCR locations are on ASM disk groups [CRSDATA],
and none of these disk groups are mounted. Details are at "(:CLSN00100:)" in "/u01/app/11.2.0.3/grid/log/rac1/agent/ohasd/oraagent_grid/oraagent_grid.log".
2013-06-29 16:14:50.848
[crsd(18409)]CRS-1012:The OCR service started on node rac1.
2013-06-29 16:14:55.085
[crsd(18409)]CRS-1201:CRSD started on node rac1.
As shown, CRSD has started successfully. Now run the script prompted by the installer GUI to finish the Oracle software installation:
[root@rac1 ~]# /u01/app/oracle/product/11.2.0.3/dbhome/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.3/dbhome
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@rac2 ~]# /u01/app/oracle/product/11.2.0.3/dbhome/root.sh --second node
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0.3/dbhome
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
As the above shows, even with the cluster unhealthy the final script still runs fine, since it only changes some files and permissions and does not need the two nodes to communicate. With that, the painful Oracle software installation is done.
The difficulty was mostly down to the poor performance of the virtual machines; installs on physical hardware have always gone far more smoothly.
Later, because the VM clocks were out of sync, errors still appeared even though Oracle's internal CTSS was in active mode:
Removal of this node from cluster in 14.350 seconds
[crsd(14317)]CRS-2772:Server 'rac1' has been assigned to pool 'Free'.
[ctssd(14014)]CRS-2411:The Cluster Time Synchronization Service will take a long time to perform time synchronization as
local time is significantly different from mean cluster time. Details in /u01/app/11.2.0.3/grid/log/rac2/ctssd/octssd.log.
2013-06-29 16:16:29.919
[ctssd(14014)]CRS-2411:The Cluster Time Synchronization Service will take a long time to perform time synchronization as
local time is significantly different from mean cluster time. Details in /u01/app/11.2.0.3/grid/log/rac2/ctssd/octssd.log.
2013-06-29 16:16:35.956
[ctssd(14014)]CRS-2411:The Cluster Time Synchronization Service will take a long time to perform time synchronization as
local time is significantly different from mean cluster time. Details in /u01/app/11.2.0.3/grid/log/rac2/ctssd/octssd.log.
2013-06-29 16:20:02.065
[cssd(13928)]CRS-1612:Network communication with node rac1 (1) missing for 50% of timeout interval. Removal of this node from cluster in 14.350 seconds
2013-06-29 16:20:09.088
[cssd(13928)]CRS-1611:Network communication with node rac1 (1) missing for 75% of timeout interval. Removal of this node from cluster in 7.330 seconds
2013-06-29 16:20:14.104
[cssd(13928)]CRS-1610:Network communication with node rac1 (1) missing for 90% of timeout interval. Removal of this node from cluster in 2.320 seconds
2013-06-29 16:20:16.427
[cssd(13928)]CRS-1632:Node rac1 is being removed from the cluster in cluster incarnation 267398270
2013-06-29 16:20:16.458
[cssd(13928)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac2 .
2013-06-29 16:20:16.478
[crsd(14317)]CRS-5504:Node down event reported for node 'rac1'.
2013-06-29 16:20:16.551
[ctssd(14014)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac2.
2013-06-29 16:20:25.093
[crsd(14317)]CRS-2773:Server 'rac1' has been removed from pool 'Free'.
Check the CTSS status:
[grid@rac1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[grid@rac2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 6000
[grid@rac2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): -400
[grid@rac1 ~]$ crsctl stat resource ora.ctssd -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ctssd
1 ONLINE ONLINE rac1 ACTIVE:0
Switch CTSS to observer mode on rac2 by restoring ntp.conf (a note on running ntpd with slewing follows the ctss checks below):
[grid@rac2 ~]$ su - root
Password:
[root@rac2 ~]# vim /etc/ntp
ntp/ ntp.conf.original
[root@rac2 ~]# mv /etc/ntp.conf.original /etc/ntp.conf
[root@rac2 ~]# exit
logout
[grid@rac2 ~]$ date
[grid@rac2 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 9300
[grid@rac2 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.
[grid@rac2 ~]$
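If you go back to ntpd rather than leaving CTSS in active mode, Oracle's RAC guidance is to run ntpd with the slew option so the clock is never stepped backwards. A minimal sketch for RHEL 5, assuming the time server 10.13.12.21 used earlier (adjust to your environment):
# vim /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"    --add -x (slewing)
# service ntpd restart
# chkconfig ntpd on    --repeat on both nodes; crsctl check ctss should then report Observer mode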
=========================================================================
dbca
select both nodes below
database area : data
fast recovery area : fra
set the sys/system password: oraclesys
enable archiving; when editing the archive parameters, just copy +fra into the field
select only EMR, but leave all of the options inside it unchecked
Notes:
OEM can only be connected to on one node
emctl status dbconsole
prompts to set ORACLE_UNQNAME=prod
the console URL uses https
log in with the sys user
Location where the database creation scripts are saved:
/data/u01/app/oracle/admin/pro/scripts
Installation complete; verify:
[grid@rac1 ~]$ crsctl stat res -t |more
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.DATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.LISTENER.lsnr
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.registry.acfs
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.chris.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
[grid@rac1 ~]$
[grid@rac1 ~]$ ps -ef |grep ora_
oracle 14460 1 0 23:06 ? 00:00:00 ora_pmon_chris1
oracle 14462 1 0 23:06 ? 00:00:01 ora_psp0_chris1
oracle 14464 1 4 23:06 ? 00:00:45 ora_vktm_chris1
oracle 14470 1 0 23:06 ? 00:00:00 ora_gen0_chris1
oracle 14472 1 0 23:06 ? 00:00:01 ora_diag_chris1
oracle 14474 1 0 23:06 ? 00:00:00 ora_dbrm_chris1
oracle 14476 1 0 23:06 ? 00:00:00 ora_ping_chris1
oracle 14478 1 0 23:06 ? 00:00:00 ora_acms_chris1
oracle 14480 1 0 23:06 ? 00:00:06 ora_dia0_chris1
oracle 14482 1 0 23:06 ? 00:00:02 ora_lmon_chris1
oracle 14484 1 0 23:06 ? 00:00:03 ora_lmd0_chris1
oracle 14486 1 1 23:06 ? 00:00:15 ora_lms0_chris1
oracle 14490 1 0 23:06 ? 00:00:00 ora_rms0_chris1
oracle 14492 1 0 23:06 ? 00:00:00 ora_lmhb_chris1
oracle 14494 1 0 23:06 ? 00:00:00 ora_mman_chris1
oracle 14496 1 0 23:06 ? 00:00:00 ora_dbw0_chris1
oracle 14498 1 0 23:06 ? 00:00:00 ora_lgwr_chris1
oracle 14500 1 0 23:06 ? 00:00:00 ora_ckpt_chris1
oracle 14502 1 0 23:06 ? 00:00:01 ora_smon_chris1
oracle 14504 1 0 23:06 ? 00:00:00 ora_reco_chris1
oracle 14506 1 0 23:06 ? 00:00:00 ora_rbal_chris1
oracle 14508 1 0 23:06 ? 00:00:00 ora_asmb_chris1
oracle 14510 1 0 23:06 ? 00:00:01 ora_mmon_chris1
oracle 14514 1 0 23:06 ? 00:00:00 ora_mmnl_chris1
oracle 14516 1 0 23:06 ? 00:00:00 ora_d000_chris1
oracle 14518 1 0 23:06 ? 00:00:00 ora_mark_chris1
oracle 14520 1 0 23:06 ? 00:00:00 ora_s000_chris1
oracle 14526 1 0 23:06 ? 00:00:05 ora_lck0_chris1
oracle 14528 1 0 23:06 ? 00:00:00 ora_rsmn_chris1
oracle 14705 1 0 23:07 ? 00:00:00 ora_gtx0_chris1
oracle 14707 1 0 23:07 ? 00:00:00 ora_rcbg_chris1
oracle 14727 1 0 23:07 ? 00:00:00 ora_qmnc_chris1
oracle 14740 1 0 23:08 ? 00:00:00 ora_q000_chris1
oracle 14742 1 0 23:08 ? 00:00:00 ora_q001_chris1
oracle 14817 1 0 23:08 ? 00:00:03 ora_cjq0_chris1
oracle 15435 1 0 23:13 ? 00:00:00 ora_smco_chris1
oracle 15439 1 0 23:13 ? 00:00:00 ora_w000_chris1
oracle 15448 1 0 23:13 ? 00:00:01 ora_gcr0_chris1
oracle 15852 1 0 23:16 ? 00:00:00 ora_pz99_chris1
grid 16583 12048 0 23:22 pts/1 00:00:00 grep ora_
[grid@rac1 ~]$
============================================================================
Pay attention to the following points about the VirtualBox setup:
0: The maximum resolution is 800x600, so the Oracle installer window cannot be displayed in full; the fix is to modify the X configuration file.
Without the Guest Additions installed, the configuration file looks like this:
[root@rac1 X11]# cat /etc/X11/xorg.conf.bak
# Xorg configuration created by pyxf86config
Section "ServerLayout"
Identifier "Default Layout"
Screen 0 "Screen0" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
EndSection
Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
EndSection
Section "Device"
Identifier "Videocard0"
Driver "vesa"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Videocard0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
EndSubSection
EndSection
With the Guest Additions installed, the configuration file looks like this:
[root@rac1 X11]# cat /etc/X11/xorg.conf_2.bak
# VirtualBox generated configuration file
# based on /etc/X11/xorg.conf.
# Xorg configuration created by pyxf86config
# Section "ServerLayout"
# Identifier "Default Layout"
# Screen 0 "Screen0" 0 0
# InputDevice "Keyboard0" "CoreKeyboard"
# EndSection
# Section "InputDevice"
# Identifier "Keyboard0"
# Driver "kbd"
# Option "XkbModel" "pc105"
# Option "XkbLayout" "us"
# EndSection
# Section "Device"
# Identifier "Videocard0"
# Driver "vesa"
# EndSection
# Section "Screen"
# Identifier "Screen0"
# Device "Videocard0"
# DefaultDepth 24
# SubSection "Display"
# Viewport 0 0
# Depth 24
# EndSubSection
# EndSection
Section "InputDevice"
Identifier "Keyboard[0]"
Driver "kbd"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
Option "Protocol" "Standard"
Option "CoreKeyboard"
EndSection
Section "InputDevice"
Driver "mouse"
Identifier "Mouse[1]"
Option "Buttons" "9"
Option "Device" "/dev/input/mice"
Option "Name" "VirtualBox Mouse Buttons"
Option "Protocol" "explorerps/2"
Option "Vendor" "Oracle Corporation"
Option "ZAxisMapping" "4 5"
Option "CorePointer"
EndSection
Section "InputDevice"
Driver "vboxmouse"
Identifier "Mouse[2]"
Option "Device" "/dev/vboxguest"
Option "Name" "VirtualBox Mouse"
Option "Vendor" "Oracle Corporation"
Option "SendCoreEvents"
EndSection
Section "ServerLayout"
Identifier "Layout[all]"
InputDevice "Keyboard[0]" "CoreKeyboard"
InputDevice "Mouse[1]" "CorePointer"
InputDevice "Mouse[2]" "SendCoreEvents"
Option "Clone" "off"
Option "Xinerama" "off"
Screen "Screen[0]"
EndSection
Section "Monitor"
Identifier "Monitor[0]"
ModelName "VirtualBox Virtual Output"
VendorName "Oracle Corporation"
EndSection
Section "Device"
BoardName "VirtualBox Graphics"
Driver "vboxvideo"
Identifier "Device[0]"
VendorName "Oracle Corporation"
EndSection
Section "Screen"
SubSection "Display"
Depth 24
EndSubSection
Device "Device[0]"
Identifier "Screen[0]"
Monitor "Monitor[0]"
EndSection
According to material gathered online, adding the following entries should be enough:
Section "Device"
Identifier "Configured Video Device"
EndSection
Section "Monitor"
Identifier "Configured Monitor"
Driver "vboxvideo" --增加
EndSection
Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Configured Video Device"
SubSection "Display" --增加
Modes "1440x960" "1024x768" --增加
EndSubSection --增加
EndSection
Since I installed the Guest Additions, I only modified the last section:
Section "Screen"
SubSection "Display"
Depth 24
Modes "1024x768" --增加
EndSubSection
Device "Device[0]"
Identifier "Screen[0]"
Monitor "Monitor[0]"
EndSection
That takes care of it.
1: If you have not installed the Guest Additions, mouse tracking is very frustrating, and sharing folders with the host does not work either, failing with:
[root@rac1 mnt]# mount -t vboxsf /data1/rac_file/grid grid
mount: unknown filesystem type 'vboxsf'
Fix:
Download VBoxGuestAdditions_3.0.4.iso, upload it to the Linux host, set the VM's CD-ROM to use this image, start the VM, and go to the mount directory:
[root@rac1 cdrom]# ls
32Bit autorun.sh VBoxSolarisAdditions.pkg VBoxWindowsAdditions-x86.exe
64Bit VBoxLinuxAdditions-amd64.run VBoxWindowsAdditions-amd64.exe
AUTORUN.INF VBoxLinuxAdditions-x86.run VBoxWindowsAdditions.exe
Note: afterwards click Devices -- Install Guest Additions on the VM window, then look at the mounted directory again and you will see the contents have changed:
[root@rac1 cdrom]# ls
32Bit autorun.sh runasroot.sh VBoxWindowsAdditions-amd64.exe
64Bit cert VBoxLinuxAdditions.run VBoxWindowsAdditions.exe
AUTORUN.INF OS2 VBoxSolarisAdditions.pkg VBoxWindowsAdditions-x86.exe
Only then can the installer be run:
[root@rac1 cdrom]# ./VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.2.12 Guest Additions for Linux............
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
Removing existing VirtualBox non-DKMS kernel modules          [ OK ]
Building the VirtualBox Guest Additions kernel modules
Building the main Guest Additions module                      [ OK ]
Building the shared folder support module                     [ OK ]
Not building the VirtualBox advanced graphics driver as this Linux version is
too old to use it.
Doing non-kernel setup of the Guest Additions                 [ OK ]
Starting the VirtualBox Guest Additions                       [ OK ]
Installing the Window System drivers
Installing X.Org 7.1 modules                                  [ OK ]
Setting up the Window System to use the Guest Additions       [ OK ]
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.
Installing graphics libraries and desktop services components [ OK ]
Restart the VM and the mouse problem is fixed. Folder sharing has not been tested since; scp was used as the workaround.
=======================================
If a disk intended for ASM was used before and now shows up as already provisioned (a former member) and cannot be used, clear the disk header:
dd if=/dev/zero of=/dev/asm-b_crs bs=512 count=10
dd if=/dev/zero of=/dev/asm-c_crs bs=512 count=10
dd if=/dev/zero of=/dev/asm-d_crs bs=512 count=10
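If ASM still reports the disks as in use, wiping a larger area usually removes any leftover ASM/OCR metadata. A sketch assuming the /dev/asm-* names bound by the udev rules above (this destroys everything on those disks):
for d in /dev/asm-b_crs /dev/asm-c_crs /dev/asm-d_crs /dev/asm-e_data; do
    dd if=/dev/zero of=$d bs=1024k count=100
done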