Configuration and Setup
1. Create the required groups and users
groupadd -g 200 oinstall
groupadd -g 201 dba
groupadd -g 202 oper
groupadd -g 203 asmadmin
groupadd -g 204 asmoper
groupadd -g 205 asmdba
useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
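To confirm the memberships took effect, a quick check (the exact UIDs/GIDs will match whatever you passed to -u and -g above):
id oracle    # expect groups: oinstall,dba,asmdba,oper
id grid      # expect groups: oinstall,asmadmin,asmdba,asmoper,oper,dba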
asmadmin group: members hold the SYSASM privilege, which covers the privilege requirements for administering Oracle Clusterware and Oracle ASM. If the grid user lacks asmadmin membership, it cannot operate ASM disk groups.
asmdba group: members get read/write access to files on ASM. Both the oracle user and the GI (grid) user must belong to this group; without asmdba membership the oracle user cannot use files stored on ASM and therefore cannot start the database.
asmoper group: like the oper group, this is an optional extra group. Its members hold the SYSOPER privilege on ASM and can start and stop the ASM instance; by default the asmadmin group already includes the asmoper privileges.
Note that the oracle user must have asmdba membership, otherwise it cannot read files on ASM and the database will not start; likewise, if the grid user lacks dba membership, CRS cannot start the oracle database resource. For a detailed breakdown of these privileges, refer to the Oracle documentation.
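Once the Grid Infrastructure and database software are installed later on, these OS groups drive OS authentication. A minimal illustration of what each user should then be able to do (not part of the setup steps themselves):
# as grid: asmadmin membership maps to SYSASM on the ASM instance
sqlplus / as sysasm
# as oracle: dba membership maps to SYSDBA on the database instance
sqlplus / as sysdba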
2. Create the required directories
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory/
chmod -R 775 /u01/app/oraInventory/
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/app/grid
mkdir -p /u01/app/11.2.0/grid
chown -R grid:oinstall /u01/app/11.2.0/grid/
chmod -R 775 /u01/app/11.2.0/grid/
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
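A quick sanity check of ownership and modes on the paths just created:
ls -ld /u01/app/oraInventory /u01/app/grid /u01/app/11.2.0/grid /u01/app/oracle
# expect grid:oinstall on the grid trees, oracle:oinstall on /u01/app/oracle, mode drwxrwxr-x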
3. Set environment variables

oracle user's .bash_profile:
export EDITOR=vi
export ORACLE_SID=fyl1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022

grid user's .bash_profile:
export EDITOR=vi
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export THREADS_FLAG=native
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
Note: when copying these profiles to the other node, change ORACLE_SID accordingly. For example, on node 2 use ORACLE_SID=fyl2 (and ORACLE_SID=+ASM2 for the grid user).
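After editing, a quick way to apply and verify the settings in the current shell:
source ~/.bash_profile
echo $ORACLE_SID $ORACLE_HOME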
4. Configure the hosts file
[root@rac2 ~]# more /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.6.21    rac1
192.168.6.22    rac1-vip
10.10.10.1      rac1-priv
192.168.6.31    rac2
192.168.6.32    rac2-vip
10.10.10.2      rac2-priv
192.168.6.41    rac-scan
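Before going further it is worth confirming that the public and private names resolve and respond from both nodes (the VIPs and SCAN are not expected to answer yet, since Clusterware will bring them up later):
ping -c 2 rac2
ping -c 2 rac2-priv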
5. Tune kernel parameters and resource limits

[root@rac1 ~]# vi /etc/sysctl.conf        # append the following at the end of the file
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
[root@rac1 ~]# sysctl -p                  # apply the settings immediately
[root@rac1 ~]# vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
[root@rac1 ~]# vi /etc/pam.d/login
session required /lib/security/pam_limits.so    # make limits.conf take effect at login
[root@rac1 ~]# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
[root@rac1 ~]# chkconfig ntpd off         # disable NTP; let CRS's CTSS synchronize time
[root@rac1 ~]# chkconfig sendmail off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
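To confirm everything is in place, log in again as oracle or grid and spot-check:
ulimit -u            # expect 16384
ulimit -n            # expect 65536
sysctl kernel.sem    # expect: kernel.sem = 250 32000 100 128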
6. Configure SSH user equivalence between the nodes
Run the following as the oracle user on both rac1 and rac2 to generate the key files:
[oracle@rac1 ~]$ mkdir .ssh
[oracle@rac1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
c1:20:05:00:ae:9c:b3:18:b5:ca:f9:77:e8:9e:54:42 oracle@rac1
[oracle@rac1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
d2:7d:92:82:17:d1:d2:33:0c:c3:02:3b:dc:8d:9b:fc oracle@rac1
[oracle@rac1 ~]$ cd .ssh
[oracle@rac1 .ssh]$ ls
id_dsa  id_dsa.pub  id_rsa  id_rsa.pub
[oracle@rac1 .ssh]$ cd
[oracle@rac1 ~]$ cat .ssh/id_rsa.pub >>.ssh/authorized_keys
[oracle@rac1 ~]$ cat .ssh/id_dsa.pub >>.ssh/authorized_keys
[oracle@rac1 ~]$ ssh rac2 cat .ssh/id_rsa.pub >>.ssh/authorized_keys
[oracle@rac1 ~]$ ssh rac2 cat .ssh/id_dsa.pub >>.ssh/authorized_keys
[oracle@rac1 ~]$ scp .ssh/authorized_keys rac2:~/.ssh
Test the equivalence (every address except the VIPs must be tested, from both nodes):
ssh rac2 date
ssh rac2-priv date
ssh rac1-priv date
ssh rac1 date
Repeat the same steps as the grid user (again on both nodes):
[grid@rac1 ~]$ mkdir .ssh
[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-keygen -t dsa
[grid@rac1 ~]$ cat .ssh/id_rsa.pub >>.ssh/authorized_keys
[grid@rac1 ~]$ cat .ssh/id_dsa.pub >>.ssh/authorized_keys
[grid@rac1 ~]$ ssh rac2 cat .ssh/id_rsa.pub >>.ssh/authorized_keys
[grid@rac1 ~]$ ssh rac2 cat .ssh/id_dsa.pub >>.ssh/authorized_keys
[grid@rac1 ~]$ scp .ssh/authorized_keys rac2:~/.ssh
Test the equivalence again (every address except the VIPs, from both nodes):
ssh rac2 date
ssh rac2-priv date
ssh rac1-priv date
ssh rac1 date
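A compact way to run all four checks in one go (a minimal sketch; run it as both oracle and grid, on both nodes):
for h in rac1 rac1-priv rac2 rac2-priv; do ssh $h date; done
Every hostname must return a date with no password prompt, otherwise the installer's equivalence check will fail.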
7. Set up the shared disks
Attach one 20 GB shared disk to both nodes, and make sure fdisk -l shows it on both.
Partition the disk on rac1:
[root@rac1 tmp]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table
[root@rac1 tmp]# fdisk /dev/sdb
After partitioning, verify the result on both nodes:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         250     2008093+  83  Linux
/dev/sdb2             251         500     2008125   83  Linux
/dev/sdb3             501         750     2008125   83  Linux
/dev/sdb4             751        1249     4008217+   5  Extended
/dev/sdb5             751        1249     4008186   83  Linux
Edit the raw-device rules file:
[root@rac1 rules.d]# pwd
/etc/udev/rules.d
[root@rac1 rules.d]# more 60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdb3", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sdb4", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", ENV{MAJOR}=="120", ENV{MINOR}=="1901", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="120", ENV{MINOR}=="1902", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", ENV{MAJOR}=="120", ENV{MINOR}=="1903", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", ENV{MAJOR}=="120", ENV{MINOR}=="1904", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", ENV{MAJOR}=="120", ENV{MINOR}=="1905", RUN+="/bin/raw /dev/raw/raw5 %M %m"
ACTION=="add", KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"
(Note: udev assignment keys must be uppercase, so the last rule uses OWNER=, not owner=.)
Restart udev and check the bindings (on both nodes):
[root@rac2 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@rac2 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 17
/dev/raw/raw2:  bound to major 8, minor 18
/dev/raw/raw3:  bound to major 8, minor 19
/dev/raw/raw4:  bound to major 8, minor 20
/dev/raw/raw5:  bound to major 8, minor 21
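Also confirm that the raw devices carry the ownership the last rule assigns, since ASM will access them as the grid user:
ls -l /dev/raw/
# expect each of raw1..raw5 owned by grid:asmadmin with mode crw-rw----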
8. Using udev for shared storage
http://www.oracledatabase12g.com/?s=udev
http://blog.csdn.net/tianlesoftware/article/details/7433344
Shared storage can also be implemented with ASMLib or with multipath software (device-mapper multipath).
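For reference, a minimal sketch of the WWID-based udev approach the links above describe (the scsi_id invocation follows the RHEL 5 syntax; the WWID, rule file name, and asm-disk1 device name are placeholders to replace with values from your own system):
# /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="<your-disk-wwid>", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
# find the WWID of a candidate disk with:
/sbin/scsi_id -g -u -s /block/sdb
Binding by WWID rather than by sdb* name keeps the device stable even if the kernel enumerates the disks in a different order after a reboot.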
9. Verify the environment for the grid installation and install missing packages with yum

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Configure yum and install the packages the check reports as "failed":
[root@rac1 ~]# cd /etc/yum.repos.d
[root@rac1 yum.repos.d]# cp rhel-debuginfo.repo yum.repo
[root@rac1 yum.repos.d]# vi yum.repo
[base]
name=Red Hat Enterprise Linux
baseurl=file:///media/Server
enabled=1
gpgcheck=0
[root@rac1 yum.repos.d]# mount /dev/cdrom /media
[root@rac1 yum.repos.d]# yum clean all
[root@rac1 yum.repos.d]# yum install -y sysstat*
[root@rac1 yum.repos.d]# yum install -y libaio*
[root@rac1 yum.repos.d]# yum install -y unix*
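Repeat the yum steps on rac2 as well, since cluvfy verifies packages on every node. Once everything is installed, re-running the same check should come back clean:
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose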