Notes on Installing Oracle 11g RAC with ASM + Raw Devices on Red Hat 7.5

I. Installation Environment
  1. Overview
  2. Plan
II. External Environment Preparation
  1. OS installation
  2. Hardware, repositories, and time
  3. iSCSI server configuration
  4. DNS server configuration
III. Node Preparation
  1. Users and groups
  2. DNS client
  3. iSCSI client
  4. Raw device configuration
  5. Passwordless SSH login
  6. Dependency packages
  7. User environment variables (.bash_profile)
  8. Kernel and system parameters
IV. Installation
  1. Cluster verification
  2. Oracle Clusterware installation
  3. Oracle database software installation
V. Classic Problems
  1. On RHEL 7 and later, GUI installer dialogs render as a thin vertical strip that cannot be resized
  2. Clusterware script error: Failed to create keys in the OLR, rc = 127, libcap.so.1
  3. Clusterware script error: ohasd failed to start
  4. asmca dialogs render as a thin vertical strip

Overall impression:
  Complicated~~

I. Installation Environment

1. Overview

  This installation ran on two virtual machines in a VMware vSphere 6.0 environment. (I am not very familiar with this virtualization product, so I stepped on quite a few mines along the way.)
  The RAC nodes run Red Hat 7.5, which is not an ideal platform for installing 11g; expect a fair number of unpredictable errors and warnings.
  Shared storage: there was no external array, and vSphere's disk-sharing options were beyond my ability to sort out, so in the end I created one more VM to provide iSCSI shared storage and DNS resolution. (A move of pure desperation; the bitterness involved is not worth recounting to outsiders.)

2. Plan

NODE  Resources            IP Plan          Hostname           Role
1     CPU: 2 Core          192.168.0.101    whdatarac1         public address
      RAM: 4 GB            192.168.0.103    whdatarac1-vip     virtual address
      DISK: 60 GB          192.168.1.11     whdatarac1-priv    heartbeat address
2     CPU: 2 Core          192.168.0.102    whdatarac2         public address
      RAM: 4 GB            192.168.0.104    whdatarac2-vip     virtual address
      DISK: 60 GB          192.168.1.12     whdatarac2-priv    heartbeat address
3     CPU: 1 Core          192.168.0.109    iscsi              iSCSI & DNS server
      RAM: 1 GB
      DISK1: 20 GB (OS)
      DISK2: 30 GB                          shared1
      DISK3: 20 GB                          shared2
      DISK4: 20 GB                          shared3
      DISK5: 30 GB                          shared4

II. External Environment Preparation

1. OS installation

NODE  Hostname     Install Type              Notes
1     whdatarac1   Development Tools + GUI   disable kdump
2     whdatarac2   Development Tools + GUI   disable kdump
3     iscsi        Minimal install           disable kdump

2. Hardware, repositories, and time

  1. Each RAC node needs two NICs: one for the public address and one for the heartbeat (private interconnect) address.
     Common misconception: the VIP does not need to be bound to a separate NIC.
  2. The network vSphere emulates is a single switch; carving out a separate VLAN for the interconnect is all that is required.
     Pitfall: skipping the separate VLAN and simply configuring the private NICs with addresses from a different subnet.
  3. Replace the Red Hat online repositories with the CentOS repositories for the matching release.
  4. RHEL 7 manages time settings with timedatectl. Set the timezone correctly (e.g. timedatectl set-timezone Asia/Shanghai), otherwise the system time ends up 8 hours off from Beijing time.
  5. Disable the firewall and SELinux (see the sketch below).
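
  For item 5, a minimal sketch, run as root on all three machines:

systemctl stop firewalld
systemctl disable firewalld
setenforce 0        # takes effect immediately, lasts until the next reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # make it permanent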

3. iSCSI server configuration

  Install the service components: yum -y install targetd targetcli
  Start the service and enable it at boot:

systemctl start targetd  
systemctl enable targetd  

  Create the disks or partitions to be shared in the interactive targetcli shell:

# As shown below, /dev/sdb is shared under the name scsi1 (scsi2-scsi4 are created the same way):
[user@nnn ~]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> ls
o- / ................................................................................................. [...]
  o- backstores ...................................................................................... [...]
  | o- block .......................................................................... [Storage Objects: 0]
  | o- fileio ......................................................................... [Storage Objects: 0]
  | o- pscsi .......................................................................... [Storage Objects: 0]
  | o- ramdisk ........................................................................ [Storage Objects: 0]
  o- iscsi .................................................................................... [Targets: 0]
  o- loopback ................................................................................. [Targets: 0]
/> cd /backstores/block
/backstores/block> create scsi1 /dev/sdb
Created block storage object scsi1 using /dev/sdb.
/backstores/block> cd /
/> ls
o- / ................................................................................................. [...]
  o- backstores ...................................................................................... [...]
  | o- block .......................................................................... [Storage Objects: 4]
  | | o- scsi1 ................................................... [/dev/sdb (30.0GiB) write-thru activated]
  | | | o- alua ........................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | | o- scsi2 ................................................... [/dev/sdc (30.0GiB) write-thru activated]
  | | | o- alua ........................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | | o- scsi3 ................................................... [/dev/sdd (20.0GiB) write-thru activated]
  | | | o- alua ........................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | | o- scsi4 ................................................... [/dev/sde (20.0GiB) write-thru activated]
  | |   o- alua ........................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | o- fileio ......................................................................... [Storage Objects: 0]
  | o- pscsi .......................................................................... [Storage Objects: 0]
  | o- ramdisk ........................................................................ [Storage Objects: 0]
  o- iscsi .................................................................................... [Targets: 0]
  o- loopback ................................................................................. [Targets: 0]

  Create the iSCSI target name: cd into /iscsi and run create; a target with an iqn-prefixed name is generated automatically:

/> cd /iscsi 
/iscsi> create
Created target iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e
Created TPG 1.

  Next, under the target's tpg1 directory, create the client initiator name, the shared LUNs, and the listening address and port in the acls, luns, and portals subdirectories respectively.
  Creating the portal IP and port may fail; run ls first — a portal with that address and port may already exist, in which case delete it and recreate it.

Create the LUNs:
/iscsi> cd iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e/
/iscsi/iqn.20...0b039f3e> cd tpg1/luns
/iscsi/iqn.20...3e/tpg1/luns> create /backstores/block/scsi1
Created LUN 0.
/iscsi/iqn.20...3e/tpg1/luns> create /backstores/block/scsi2
Created LUN 1.
/iscsi/iqn.20...3e/tpg1/luns> create /backstores/block/scsi3
Created LUN 2.
/iscsi/iqn.20...3e/tpg1/luns> create /backstores/block/scsi4
Created LUN 3.
Create the client initiator name (ACL):
/iscsi/iqn.20...3e/tpg1/luns> cd ../acls
/iscsi/iqn.20...3e/tpg1/acls> create iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e
Created Node ACL for iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e
Create the portal IP and port:
/iscsi/iqn.20...3e/tpg1/acls> cd ../portals
/iscsi/iqn.20.../tpg1/portals> create 192.168.0.109
Using default IP port 3260
Created network portal 192.168.0.109:3260.

  Check the resulting configuration:

/iscsi/iqn.20...0b039f3e/tpg1> cd /
/> ls
o- / ................................................................................................. [...]
  o- backstores ...................................................................................... [...]
  | o- block .......................................................................... [Storage Objects: 4]
  | | o- scsi1 ................................................... [/dev/sdb (30.0GiB) write-thru activated]
  | | | o- alua ........................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | | o- scsi2 ................................................... [/dev/sdc (30.0GiB) write-thru activated]
  | | | o- alua ........................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | | o- scsi3 ................................................... [/dev/sdd (20.0GiB) write-thru activated]
  | | | o- alua ........................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | | o- scsi4 ................................................... [/dev/sde (20.0GiB) write-thru activated]
  | |   o- alua ........................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............................................... [ALUA state: Active/optimized]
  | o- fileio ......................................................................... [Storage Objects: 0]
  | o- pscsi .......................................................................... [Storage Objects: 0]
  | o- ramdisk ........................................................................ [Storage Objects: 0]
  o- iscsi .................................................................................... [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e .................................... [TPGs: 1]
  |   o- tpg1 ....................................................................... [no-gen-acls, no-auth]
  |     o- acls .................................................................................. [ACLs: 1]
  |     | o- iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e ....................... [Mapped LUNs: 4]
  |     |   o- mapped_lun0 ......................................................... [lun0 block/scsi1 (rw)]
  |     |   o- mapped_lun1 ......................................................... [lun1 block/scsi2 (rw)]
  |     |   o- mapped_lun2 ......................................................... [lun2 block/scsi3 (rw)]
  |     |   o- mapped_lun3 ......................................................... [lun3 block/scsi4 (rw)]
  |     o- luns .................................................................................. [LUNs: 4]
  |     | o- lun0 .............................................. [block/scsi1 (/dev/sdb) (default_tg_pt_gp)]
  |     | o- lun1 .............................................. [block/scsi2 (/dev/sdc) (default_tg_pt_gp)]
  |     | o- lun2 .............................................. [block/scsi3 (/dev/sdd) (default_tg_pt_gp)]
  |     | o- lun3 .............................................. [block/scsi4 (/dev/sde) (default_tg_pt_gp)]
  |     o- portals ............................................................................ [Portals: 1]
  |       o- 192.168.0.109:3260 ....................................................................... [OK]
  o- loopback ................................................................................. [Targets: 0]
/> 
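
  Before leaving targetcli, persist the configuration explicitly; on RHEL 7 the saved JSON under /etc/target/ is what gets restored at boot. (Restoration is handled by target.service, so enabling it in addition to targetd may be necessary — an assumption I did not separately verify here.)

/> saveconfig
/> exit
systemctl enable target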

4. DNS server configuration

  Install the service components: yum install bind-libs bind bind-utils
  Start the service and enable it at boot:

systemctl start named  
systemctl enable named  

  Edit /etc/named.conf:

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

options {
    listen-on port 53 { any; };  // changed from 127.0.0.1 to any
    listen-on-v6 port 53 { ::1; };
    directory   "/var/named";
    dump-file   "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    allow-query     { any; };  // changed from 127.0.0.1 to any

    /* 
     - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
     - If you are building a RECURSIVE (caching) DNS server, you need to enable 
       recursion. 
     - If your recursive DNS server has a public IP address, you MUST enable access 
       control to limit queries to your legitimate users. Failing to do so will
       cause your server to become part of large scale DNS amplification 
       attacks. Implementing BCP38 within your network would greatly
       reduce such attack surface 
    */
    recursion yes;

    dnssec-enable yes;
    dnssec-validation yes;

    /* Path to ISC DLV key */
    bindkeys-file "/etc/named.iscdlv.key";

    managed-keys-directory "/var/named/dynamic";

    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
    type hint;
    file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

  Edit /etc/host.conf:

order bind,hosts    # name-resolution order: query DNS first, then /etc/hosts (the reverse order also works).
multi on    # allow hosts in /etc/hosts to have multiple addresses (multi-homed hosts).
nospoof on    # reject IP spoofing against this server; spoofing forges another machine's IP address to gain its trust.

  Edit /etc/named.rfc1912.zones:

// named.rfc1912.zones:
//
// Provided by Red Hat caching-nameserver package 
//
// ISC BIND named zone configuration for zones recommended by
// RFC 1912 section 4.1 : localhost TLDs and address zones
// and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt
// (c)2007 R W Franks
// 
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

zone "localhost.localdomain" IN {
    type master;
    file "named.localhost";
    allow-update { none; };
};

zone "localhost" IN {
    type master;
    file "named.localhost";
    allow-update { none; };
};

zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN {
    type master;
    file "named.loopback";
    allow-update { none; };
};

zone "1.0.0.127.in-addr.arpa" IN {
    type master;
    file "named.loopback";
    allow-update { none; };
};

zone "0.in-addr.arpa" IN {
    type master;
    file "named.empty";
    allow-update { none; };
};

zone "localdomain." IN {
        type master;
        file "localdomain.zone";
        allow-update { none; };
};

// Added content (the VIPs and the heartbeat addresses both fall inside 192.168.0.0/16, so the reverse zone uses the /16; per-/24 zones would also work):

zone "whdata-rac.com" IN {
        type master;
        file "whdata-rac.com.zone";
        allow-update { none; };
};

zone "168.192.in-addr.arpa" IN {
        type master;
        file "168.192.zone";
        allow-update { none; };
};

  Under /var/named/, create the forward and reverse zone files (the file names must match the zone definitions above).
  Mind the ownership and permissions of the new files — owner: root, group: named, mode: 644.
  Forward zone file whdata-rac.com.zone:

$TTL 1D
@   IN SOA  @ rname.invalid. (
                    0   ; serial
                    1D  ; refresh
                    1H  ; retry
                    1W  ; expire
                    3H )    ; minimum
    NS  @
    A   127.0.0.1
    AAAA    ::1

iscsi        IN     A       192.168.0.109
whdatarac-scan   IN     A       192.168.0.105
whdatarac-scan   IN     A       192.168.0.106
whdatarac-scan   IN     A       192.168.0.107
whdatarac1       IN     A       192.168.0.101
whdatarac2       IN     A       192.168.0.102
whdatarac1-vip   IN     A       192.168.0.103
whdatarac2-vip   IN     A       192.168.0.104
whdatarac1-priv  IN     A       192.168.1.11
whdatarac2-priv  IN     A       192.168.1.12

  Reverse zone file 168.192.zone:

$TTL 1D
@   IN SOA  @ rname.invalid. (
                    0   ; serial
                    1D  ; refresh
                    1H  ; retry
                    1W  ; expire
                    3H )    ; minimum
    NS  @
    A   127.0.0.1
    AAAA    ::1
    PTR localhost.

109.0    IN      PTR    iscsi.whdata-rac.com.
105.0    IN      PTR    whdatarac-scan.whdata-rac.com.
106.0    IN      PTR    whdatarac-scan.whdata-rac.com.
107.0    IN      PTR    whdatarac-scan.whdata-rac.com.
101.0    IN      PTR    whdatarac1.whdata-rac.com.
102.0    IN      PTR    whdatarac2.whdata-rac.com.
103.0    IN      PTR    whdatarac1-vip.whdata-rac.com.
104.0    IN      PTR    whdatarac2-vip.whdata-rac.com.
11.1     IN      PTR    whdatarac1-priv.whdata-rac.com.
12.1     IN      PTR    whdatarac2-priv.whdata-rac.com.
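
  The configuration and both zone files can be validated with BIND's own checkers before restarting the service:

named-checkconf /etc/named.conf
named-checkzone whdata-rac.com /var/named/whdata-rac.com.zone
named-checkzone 168.192.in-addr.arpa /var/named/168.192.zone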

  Edit /etc/resolv.conf:
  This file is regenerated every time the NIC or the host restarts. chattr +i /etc/resolv.conf keeps it from being overwritten, but then the Oracle cluster installer's network check complains that it cannot find the corresponding nodes.
  Many posts say the setting can be made permanent through the NIC config file — adding DOMAIN=whdata-rac.com to /etc/sysconfig/network-scripts/ifcfg-* — but in my tests that did not take effect.

# Generated by NetworkManager
search whdata-rac.com
nameserver 192.168.0.109
nameserver 202.*.*.*
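
  An untested alternative, sketched on the assumption that NetworkManager manages the interface (the connection name ens33 is a placeholder; list the real one with nmcli con show):

nmcli con mod ens33 ipv4.dns 192.168.0.109 ipv4.dns-search whdata-rac.com
nmcli con up ens33    # re-activate so NetworkManager rewrites /etc/resolv.conf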

  Point DNS at 192.168.0.109, restart the DNS service, and test on the DNS server:

systemctl restart  named.service
dig -x 192.168.0.109
nslookup whdatarac-scan
nslookup 192.168.0.102

III. Node Preparation

  All of the following preparation must be performed on both RAC nodes.

1. Users and groups

groupadd -g 1000 oinstall
groupadd -g 1001 asmadmin
groupadd -g 1002 asmdba
groupadd -g 1003 asmoper
groupadd -g 1004 dba
groupadd -g 1005 oper
useradd -u 1000 -g oinstall -G wheel,asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1001 -g oinstall -G wheel,dba,asmdba,oper oracle
echo "whdata" | passwd --stdin grid 
echo "whdata" | passwd --stdin oracle 

2. DNS client

  a. Edit /etc/hosts. This step is not strictly necessary and can be skipped:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

#DNS&iscsi
192.168.0.109 iscsi iscsi.whdata-rac.com
# Public
192.168.0.101 whdatarac1 whdatarac1.whdata-rac.com
192.168.0.102 whdatarac2 whdatarac2.whdata-rac.com
# Private
192.168.1.11 whdatarac1-priv whdatarac1-priv.whdata-rac.com
192.168.1.12 whdatarac2-priv whdatarac2-priv.whdata-rac.com
# Virtual
192.168.0.103 whdatarac1-vip whdatarac1-vip.whdata-rac.com
192.168.0.104 whdatarac2-vip whdatarac2-vip.whdata-rac.com
# SCAN
192.168.0.105 whdatarac-scan whdatarac-scan.whdata-rac.com
192.168.0.106 whdatarac-scan whdatarac-scan.whdata-rac.com
192.168.0.107 whdatarac-scan whdatarac-scan.whdata-rac.com

  b. Edit /etc/resolv.conf and add search whdata-rac.com before the nameserver entries.
  c. Test with nslookup; resolution works.
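
  For example, checking forward, fully qualified, and reverse resolution from a node:

nslookup whdatarac-scan
nslookup whdatarac2-priv.whdata-rac.com
nslookup 192.168.0.103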

3. iSCSI client

  Install the iSCSI client: yum install iscsi-initiator-utils
  Configure the initiator name:

[root@whdatarac1 ~]# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e

  Start the iSCSI client and enable it at boot:

[root@whdatarac1 ~]# systemctl restart iscsid
[root@whdatarac1 ~]# systemctl enable iscsid

  Use the iscsiadm management tool to scan the remote iSCSI server, then see which shared storage resources it exports:
  -m discovery    scan for and discover available storage resources
  -t st           the discovery type (st = sendtargets)
  -p 192.168.0.109    the iSCSI server's IP address

[root@whdatarac1 ~]# iscsiadm -m discovery -t st -p 192.168.0.109
192.168.0.109:3260,1 iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e

  Log in to the iSCSI server:
  -m node    operate on node records (this host acts as an initiator node)
  -T iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e    the storage resource to use (the discovery output above)
  -p 192.168.0.109    again the iSCSI server's IP address
  --login (or -l)    perform the login

[root@whdatarac1 ~]# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e -p 192.168.0.109 -l
Logging in to [iface: default, target: iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e, portal: 192.168.0.109,3260] (multiple)
Login to [iface: default, target: iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e, portal: 192.168.0.109,3260] successful.
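
  open-iscsi normally records discovered nodes with node.startup = automatic, so the session should return after a reboot; if it does not, set it explicitly:

iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.scsi.x8664:sn.7e2a0b039f3e \
  -p 192.168.0.109 --op update -n node.startup -v automatic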

  A quick look shows the four disks successfully attached:

[root@whdatarac1 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   60G  0 disk 
├─sda1          8:1    0  500M  0 part /boot
└─sda2          8:2    0 59.5G  0 part 
  ├─rhel-root 253:0    0 51.5G  0 lvm  /
  └─rhel-swap 253:1    0    8G  0 lvm  [SWAP]
sdb             8:16   0   30G  0 disk 
sdc             8:32   0   20G  0 disk 
sdd             8:48   0   20G  0 disk 
sde             8:64   0   30G  0 disk 

4. Raw device configuration

  a. Edit the udev rules file

  vi /usr/lib/udev/rules.d/60-raw.rules
  Append the following:

ACTION=="add", KERNEL=="sdb", RUN+="/usr/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc", RUN+="/usr/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd", RUN+="/usr/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde", RUN+="/usr/bin/raw /dev/raw/raw4 %N"
KERNEL=="raw1", OWNER="grid" GROUP="asmadmin", MODE="0660"
KERNEL=="raw2", OWNER="grid" GROUP="asmadmin", MODE="0660"
KERNEL=="raw3", OWNER="grid" GROUP="asmadmin", MODE="0660"
KERNEL=="raw4", OWNER="grid" GROUP="asmadmin", MODE="0660"

  b. Reboot the host, or reload the rules and replay the add events (reloading alone does not re-run the rules for devices that already exist):

  udevadm control --reload-rules
  udevadm trigger --type=devices --action=add

  c. Check the devices

[root@whdatarac1 ~]# ls -al /dev/raw/
total 0
drwxr-xr-x  2 root root        140 Dec 26 09:45 .
drwxr-xr-x 20 root root       3380 Dec 26 09:45 ..
crw-rw----  1 grid asmadmin 162, 1 Dec 28 12:04 raw1
crw-rw----  1 grid asmadmin 162, 2 Dec 26 09:45 raw2
crw-rw----  1 grid asmadmin 162, 3 Dec 28 12:04 raw3
crw-rw----  1 grid asmadmin 162, 4 Dec 28 12:04 raw4
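
  The bindings can also be queried with the raw utility itself; the reported major/minor numbers should match the sdb-sde block devices:

raw -qa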

5. Passwordless SSH login

  a. Configure passwordless SSH between the two nodes, separately for the grid user and the oracle user.

# On node 1 (repeat the whole procedure for both grid and oracle):
ssh-keygen
cd ~/.ssh
cp id_rsa.pub authorized_keys
scp authorized_keys [email protected]:~/.ssh/
# On node 2 (run ssh-keygen there first):
ssh [email protected]
cd .ssh
cat id_rsa.pub >> authorized_keys
scp authorized_keys [email protected]:~/.ssh/

  b. Then test each login one by one, from both nodes (the first connection asks to save the host key — answer yes):

ssh whdatarac1 date
ssh whdatarac1-vip date
ssh whdatarac1-priv date
ssh whdatarac2 date
ssh whdatarac2-vip date
ssh whdatarac2-priv date

6. Dependency packages

  According to the official documentation the following packages are needed. Because the OS release is newer than 11g expects, a few of them will not install; those can be ignored for now (see the one-liner after the list).

binutils
compat-libcap1
cpp
gcc
gcc-c++
glibc
glibc-devel
glibc-headers
ksh
libaio
libaio-devel
libgcc
libstdc++
libstdc++-devel
libXi
libXtst
make
mpfr
sysstat
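
  A sketch installing the whole list in one pass; yum warns about any package it cannot find in the repo and installs the rest:

yum -y install binutils compat-libcap1 cpp gcc gcc-c++ glibc glibc-devel \
  glibc-headers ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel \
  libXi libXtst make mpfr sysstat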

  Install the ASM-related packages:

kmod-oracleasm       # asm dependency, installed with yum
oracleasmlib         # downloaded from the Oracle website; rpm install errors out, so use --force
oracleasm-support    # downloaded from the Oracle website; rpm install errors out, so use --force
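
  For example (the rpm file names are placeholders for whatever versions were downloaded):

yum -y install kmod-oracleasm
rpm -ivh --force oracleasmlib-*.rpm oracleasm-support-*.rpm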

7. User environment variables (.bash_profile)

  Only four variables really need to be declared; the more you declare, the more can go wrong.
  grid user:

export ORACLE_SID=+ASM1
#export ORACLE_SID=+ASM2    # use this on node 2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH

  oracle user:

export ORACLE_SID=orcl1
#export ORACLE_SID=orcl2    # use this on node 2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db1
export PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin

8. Kernel and system parameters

  /etc/sysctl.conf

kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.aio-max-nr = 1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# run sysctl -p to apply immediately

  /etc/security/limits.conf

grid soft nproc 655350 
grid hard nproc 655350
grid soft nofile 655350 
grid hard nofile 655350 
grid soft stack 655350
grid hard stack 655350

oracle soft nproc 655350
oracle hard nproc 655350
oracle soft nofile 655350
oracle hard nofile 655350
oracle soft stack 655350
oracle hard stack 655350

  /etc/pam.d/login — add the pam_limits line (on RHEL 7 the bare module name is sufficient; the full-path form /lib64/security/pam_limits.so is equivalent):

session    required     pam_limits.so

IV. Installation

1. Cluster verification

  Before the installation proper begins, verify cluster readiness:
  First install cvuqdisk; the package is in the rpm directory of the grid installation media.
  The verification commands:

./runcluvfy.sh stage -pre crsinst -n whdatarac1,whdatarac2 -fixup -verbose
./runcluvfy.sh stage -post hwos -n whdatarac1,whdatarac2 -verbose

2. Oracle Clusterware installation

  For step-by-step screenshots of the clusterware installation, refer to other install guides; non-essential steps are not illustrated here.
  Before installing, the grid installer checks cluster readiness once more:

Because the OS release is newer than 11g expects, my warnings were mainly these:
 Three packages could not be installed, or were installed but not detected: elfutils-libelf-devel compat-libcap1 libaio-devel
 Resolution: Ignore All
 The scripts run at the very end of the installation fail; the fixes are in Part V, Classic Problems.

  Two scripts must be run as root; the one that fails is the second:

/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh    # this one fails

3. Oracle database software installation

  Software-only install; the process completed without errors.

V. Classic Problems

1. On RHEL 7 and later, GUI installer dialogs render as a thin vertical strip that cannot be resized

  Suspected cause: the Java version bundled with the Oracle installer.
  Workaround: run the installer against the local JRE:

./runInstaller -jreLoc /usr/lib/jvm/jre-1.8.0

2. Clusterware script error: Failed to create keys in the OLR, rc = 127, libcap.so.1

  The full error:

Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory 

Failed to create keys in the OLR at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7660.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

  The error says the libcap.so.1 library cannot be found. My guess is this relates to the packages missing from the repo: when a package vanishes from a distribution it has usually been superseded by another, and Oracle's documentation simply has not caught up. First, find files with similar names:

[root@whdatarac1 oui]# find / -name libcap*
find: ‘/run/user/1001/gvfs’: Permission denied
/usr/lib64/libcap-ng.so.0
/usr/lib64/libcap-ng.so.0.0.0
/usr/lib64/libcap.so.2
/usr/lib64/libcap.so.2.22
/usr/lib64/pkgconfig/libcap.pc
/usr/lib64/openssl/engines/libcapi.so
/usr/lib64/libcap.so.1
/usr/lib64/libcap.so
/usr/share/doc/libcap-ng-0.7.5
/usr/share/doc/libcap-2.22
/usr/share/doc/man-pages-overrides-7.5.2/libcap-ng
/usr/share/man/man3/libcap.3.gz

  Go into /usr/lib64/ and list the related files:

[root@whdatarac1 lib64]# ls -al libcap.so*
lrwxrwxrwx  1 root root    11 Dec 26 11:33 libcap.so -> libcap.so.2
lrwxrwxrwx. 1 root root    14 Dec 17 14:41 libcap.so.2 -> libcap.so.2.22
-rwxr-xr-x. 1 root root 20032 Mar  6  2017 libcap.so.2.22

  Clearly the real library file is libcap.so.2.22; the other two entries are just symlinks. So we may as well create a libcap.so.1 symlink pointing at it, as below. Problem solved.
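
  The one-liner, run as root on both nodes:

ln -s /usr/lib64/libcap.so.2.22 /usr/lib64/libcap.so.1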

3. Clusterware script error: ohasd failed to start

  The full error:

CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

  This is an Oracle bug: 11g expects init to respawn ohasd, but RHEL 7 uses systemd, so the startup handshake never completes. The workaround:
  While root.sh is running, execute the following from a second session, as soon as the pipe /var/tmp/.oracle/npohasd appears:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

  Before re-running root.sh, first call the deconfig script to remove the failed configuration (on a cluster node, rootcrs.pl with the same flags is the commonly cited variant):

/u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose

4. asmca dialogs render as a thin vertical strip

  No fix found for this one. The ASM disk groups can be managed from the command line instead (see the sketch below).
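
  A sketch of command-line management as the grid user (the disk group name DATA and the raw device used here are illustrative, not this install's final configuration):

asmcmd lsdg     # list mounted disk groups
asmcmd lsdsk    # list ASM disks
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/raw/raw2';
EOF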
