GPFS Installation

 

1. Confirm that the same LUN is visible on all servers

Log in to each server and run fdisk -l to confirm.
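
A quick way to compare the nodes is to check the reported size of the shared device on each of them (this assumes the LUN appears as /dev/sdb, as it does in step 9 below; the device name may differ in your environment):

fdisk -l /dev/sdb | grep '^Disk /dev/sdb'     # the reported size should be identical on all servers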

2. Disable the firewall & SELinux on all servers

service iptables stop

chkconfig iptables off

setenforce   0

 

vi /etc/selinux/config

Change the line that does not begin with "#" to: SELINUX=disabled
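
Alternatively, the same change can be made non-interactively (a minimal sketch; verify the result afterwards):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config     # should now show SELINUX=disabled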

3. On each server, add IP-to-hostname mappings for the other hosts in /etc/hosts

Detailed steps omitted.
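
For reference, a minimal set of /etc/hosts entries for the two nodes used later in this document might look as follows (IP addresses taken from the mmlscluster output in step 6; adjust to your environment):

10.4.52.101   wtydb21
10.4.52.102   wtydb22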

4. Configure passwordless SSH for root between all servers

Detailed steps omitted.
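
A minimal sketch of one common approach using RSA keys (run on every node, copying the key to every node including the local one, since GPFS administration commands also need passwordless access to the local host):

ssh-keygen -t rsa                 # accept the defaults, empty passphrase
ssh-copy-id root@wtydb21
ssh-copy-id root@wtydb22
ssh wtydb22 date                  # verify: must not prompt for a password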

5. Set up a Red Hat installation repository from the installation DVD and install the following dependency packages

Repository setup steps omitted. Then install all of the packages listed below with yum install (a single-command example follows the list).

rpm-build

libstdc++

compat-libstdc++

libXp

imake

gcc-c++

kernel

kernel-headers

kernel-devel

kernel-smp

kernel-smp-devel

xorg-x11-xauth
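
For example, all of the listed packages can be installed with a single command (a sketch; depending on the RHEL release some of them, such as kernel-smp, may not exist and can be dropped):

yum install -y rpm-build libstdc++ compat-libstdc++ libXp imake gcc-c++ \
    kernel kernel-headers kernel-devel kernel-smp kernel-smp-devel xorg-x11-xauth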

 

6. Install the GPFS software (on every server)

1. With the Red Hat DVD repository set up (step 5), install the GPFS RPMs:

  yum install /tmp/gpfs/*.rpm

  rpm -qa | grep gpfs     # verify that the GPFS packages are installed correctly

  2. Build the GPFS portability layer and install the resulting RPM package

  cd /usr/lpp/mmfs/src && make Autoconfig && make World && make InstallImages

  [root@linux1 src]# make rpm

  Install the RPM package that the build generates automatically:

  yum install /root/rpmbuild/RPMS/x86_64/gpfs.gplbin-2.6.32-131.0.15.el6.x86_64-3.5.0-0.x86_64.rpm
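
  The gpfs.gplbin package is built for one specific kernel, so it is worth confirming that the kernel version embedded in the RPM file name matches the running kernel (a quick check; the path below is the default rpmbuild output directory used above):

  uname -r                                           # e.g. 2.6.32-131.0.15.el6.x86_64
  ls /root/rpmbuild/RPMS/x86_64/gpfs.gplbin-*.rpm    # the file name should contain the same kernel version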

  3. Update the PATH (run on every host)

  On each host, edit the .bash_profile file in the HOME directory and append the following line at the end:

export PATH=$PATH:/usr/lpp/mmfs/bin

source /root/.bash_profile

  4. Create the directory that will be used as the GPFS file system mount point (run on every host)

  mkdir /gpfs_home

  5. Create the GPFS cluster node file (the line format is summarized after the file contents below)

  [root@wtydb21 tmp]# vi /tmp/gpfsprofile

  wtydb21:quorum-manager

  wtydb22:quorum-manager
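
  For reference, each line of this node file has the form NodeName:NodeDesignations, where the designation combines quorum or nonquorum with manager or client (a brief summary; see the mmcrcluster documentation for the full syntax). A hypothetical third node acting as an ordinary client would be listed as:

  wtydb23:client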

  6. Create the cluster, making sure to specify ssh and scp as the remote commands

  [root@wtydb21 gpfs]# mmcrcluster -N /tmp/gpfsprofile  -p wtydb21 -s wtydb22  -C gpfs_cluster -r /usr/bin/ssh -R /usr/bin/scp

  Sat Apr  6 12:17:35 CST 2013: mmcrcluster: Processing node wtydb21

  Sat Apr  6 12:17:35 CST 2013: mmcrcluster: Processing node wtydb22

  Sat Apr  6 12:17:38 CST 2013: mmcrcluster: Processing node wtydb23

  mmcrcluster: Command successfully completed

  mmcrcluster: Warning: Not all nodes have proper GPFS license designations.

  Use the mmchlicense command to designate licenses as needed.

  mmcrcluster: Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

  The mmcrcluster parameters mean the following:

  -C gpfs_cluster        sets the cluster name

  -N /tmp/gpfsprofile    specifies the node file

  -p wtydb21             specifies the primary cluster configuration server

  -s wtydb22             specifies the secondary cluster configuration server

  -r /usr/bin/ssh, -R /usr/bin/scp    use ssh and scp as the remote shell and remote file copy commands

  (-U can additionally be used to define the UID domain; it is not used in the command above.)

  7. Accept the license designations

  [root@wtydb21 pam.d]# mmchlicense server --accept -N wtydb21,wtydb22

  8. Verify that the cluster was created

  [root@wtydb21 gpfs]# mmlscluster

  GPFS cluster information

  ========================

  GPFS cluster name:         gpfs_cluster.wtydb21

  GPFS cluster id:           12146727015547904479

  GPFS UID domain:           gpfs_cluster.wtydb21

  Remote shell command:      /usr/bin/ssh

  Remote file copy command:  /usr/bin/scp

  GPFS cluster configuration servers:

  -----------------------------------

  Primary server:    wtydb21

  Secondary server:  wtydb22

  Node  Daemon node name  IP address   Admin node name  Designation

  -------------------------------------------------------------------

  1   wtydb21           10.4.52.101  wtydb21          quorum-manager

  2   wtydb22           10.4.52.102  wtydb22          quorum-manager

  

  9. Create the NSD, here using /dev/sdb (depends on the actual environment; it may not be /dev/sdb). The descriptor field layout is summarized after the file contents below.

  [root@wtydb21 etc]# vi /tmp/nsdprofile

  /dev/sdb: wtydb21: wtydb22:dataAndMetadata:::   # With only two nodes, both node names must be listed here; three or more nodes have not been tested.
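
  For reference, this colon-separated line follows the traditional GPFS disk descriptor layout (a summary; empty fields take their default values):

  # DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool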

  

[root@wtydb21 gpfs]# mmcrnsd -F /tmp/nsdprofile

  mmcrnsd: Processing disk sdb

  mmcrnsd: Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

  At this point the system automatically rewrites the /tmp/nsdprofile file; its contents become:

  [root@wtydb21 ~]# cat /tmp/nsdprofile

  # dm-4:::dataAndMetadata::

  gpfs1nsd:::dataAndMetadata:-1::system

  10. Start the cluster

  [root@wtydb21 /]# mmstartup -a

  .

  [root@wtydb21 src]# mmgetstate -a -L

  Node number  Node name       Quorum  Nodes up  Total nodes  GPFS state  Remarks

  ------------------------------------------------------------------------------------

  1      wtydb21            2        2          3       active      quorum node

  2      wtydb22            2        2          3       active      quorum node

  11. Create the GPFS file system

  [root@wtydb21 src]# mmcrfs /gpfs_home gpfs_lv -F /tmp/nsdprofile -A yes -n 30 -v no

  The following disks of gpfs_lv will be formatted on node wtydb21:

  gpfs1nsd: size 100829184 KB

  Formatting file system ...

  Disks up to size 848 GB can be added to storage pool system.

  Creating Inode File

  Creating Allocation Maps

  Creating Log Files

  Clearing Inode Allocation Map

  Clearing Block Allocation Map

  Formatting Allocation Map for storage pool system

  Completed creation of file system /dev/gpfs_lv.

  mmcrfs: Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

  The parameters mean the following:

  /gpfs_home   the file system mount point

  gpfs_lv      the file system (device) name

  -F           specifies the NSD descriptor file

  -A yes       mount the file system automatically when the GPFS daemon starts

  -B           block size (64 KB in the original note; not specified in the command above, so the default is used)

  -n 30        estimated number of nodes that will mount the file system

  -v no        do not verify whether the disks already contain an existing file system
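
  After creation, the attributes that were actually applied (block size, replication settings, and so on) can be reviewed, for example with:

  mmlsfs gpfs_lv
  mmlsdisk gpfs_lv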

  12. Mount the file system

  [root@wtydb21 /]# mount /gpfs_home

  [root@wtydb21 /]# df
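
  Note that mount /gpfs_home mounts the file system on the local node only; to mount it on every node in one step, the GPFS mmmount command can be used instead (a sketch):

  mmmount gpfs_lv -a        # mount /dev/gpfs_lv on all nodes
  mmlsmount gpfs_lv -L      # list the nodes on which it is mounted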

  13. Configure GPFS to start automatically at boot

  [root@wtydb21 /]# mmchconfig autoload=yes

  14. Query the GPFS configuration

  [root@wtydb21 share]# mmlsconfig

  [root@wtydb21 share]# mmgetstate -a

  15. Note: after the first GPFS installation the version was too old and the database could not start correctly; downloading and upgrading to 3.5.0.2 resolved the problem.

  [root@sdcmpdb1 ~]# mmstartup -a

  mmstartup: Required service not applied. Install GPFS 3.5.0.1 or later.

  mmstartup: Command failed.  Examine previous error messages to determine cause.
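
  To confirm the installed GPFS level before starting the daemon again, the package versions can be queried directly (a sketch; package names as shipped with GPFS 3.5):

  rpm -qa | grep '^gpfs'    # gpfs.base, gpfs.gpl, etc. should all show the upgraded level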

 

 

 

7. GPFS command reference

The default path of GPFS commands is /usr/lpp/mmfs/bin.

GPFS status:

#mmgetstate -a

 

 Node number  Node name        GPFS state
------------------------------------------
       1      <node 1>         active
       2      <node 2>         active
       3      <node 3>         active
       4      <node 4>         active
       5      <node 5>         active
       6      <node 6>         active

 

#mmgetstate -a -L

 Node number  Node name       Quorum  Nodes up  Total nodes  GPFS state  Remarks   
------------------------------------------------------------------------------------
       1      <node 1>          1*        2          6       active      quorum node
       2      <node 2>          1*        2          6       active      quorum node
       3      <node 3>          1*        2          6       active     
       4      <node 4>          1*        2          6       active     
       5      <node 5>          1*        2          6       active     
       6      <node 6>          1*        2          6       active     

 

After a start, the GPFS state can be “arbitrating”; after a minute or so the status becomes “active” as shown above.

#mmgetstate -a -L

 Node number  Node name       Quorum  Nodes up  Total nodes  GPFS state  Remarks   
------------------------------------------------------------------------------------
       1      <node 1>          1*        0          6       arbitrating     quorum node
       2      <node 2>          1*        0          6       arbitrating     quorum node
       3      <node 3>          1*        0          6       arbitrating    
       4      <node 4>          1*        0          6       arbitrating    
       5      <node 5>          1*        0          6       arbitrating    
       6      <node 6>          1*        0          6       arbitrating    

Start GPFS on all nodes:

#mmstartup -a

Start GPFS on the local node only:

#mmstartup

Stop GPFS on all nodes:

#mmshutdown -a  

Stop GPFS on the local node only:

#mmshutdown

 

GPFS logging:

# tail -f /var/adm/ras/mmfs.log.latest

List active mounts:

# mmlsmount all
File system gpfstestlv is mounted on 6 nodes.

List Storage Pools:

# mmlsfs all -P
File system attributes for /dev/gpfstestlv:
===========================================
flag                value                    description
------------------- ------------------------ -----------------------------------
 -P                 system                   Disk storage pools in file system

List Disks in each filesystem

# mmlsfs all -d

File system attributes for /dev/gpfstestlv:
===========================================
flag                value                    description
------------------- ------------------------ -----------------------------------
 -d                 gpfs7nsd;gpfs8nsd        Disks in file system

List current NSD (Network shared disks)

# mmlsnsd -M

 Disk name    NSD volume ID      Device         Node name                Remarks
---------------------------------------------------------------------------------------
 gpfs1nsd     0A158E144FC88AFB   /dev/hdisk8    <node 1>
 gpfs1nsd     0A158E144FC88AFB   /dev/hdisk8    <node 2>
 gpfs1nsd     0A158E144FC88AFB   /dev/hdisk5    <node 3>
 ..
 gpfs2nsd     0A158E144FC88AFC   /dev/hdisk9    <node 6>
 ..

List filesystem manager node(s)

# mmlsmgr
file system      manager node
---------------- ------------------
gpfstestlv       10.21.148.30 (<node 1>)

Cluster manager node: 10.21.148.30 (<node 1>)
