RHCS + GFS2 + iSCSI + cLVM: Building a Highly Available Web Cluster

RHEL 6.6, x86_64

Package sources:

    EPEL repository

    local yum repository



RHCS installation and configuration

192.168.1.5   install luci      (two disks; /dev/sdb will provide the shared storage)


Cluster nodes:

192.168.1.6  install ricci     node1.mingxiao.info     node1

192.168.1.7  install ricci     node2.mingxiao.info     node2

192.168.1.8  install ricci     node3.mingxiao.info     node3



Prerequisites:

1> 192.168.1.5 has passwordless SSH trust with node1, node2, and node3

2> Time is synchronized on all hosts

3> The node hostnames are node1.mingxiao.info, node2.mingxiao.info, and node3.mingxiao.info


On host 192.168.1.5:

vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.6 node1.mingxiao.info node1
192.168.1.7 node2.mingxiao.info node2
192.168.1.8 node3.mingxiao.info node3

# for I in {1..3}; do scp /etc/hosts node$I:/etc;done


Time synchronization:

# for I in {1..3}; do ssh node$I 'ntpdate time.windows.com';done


Set up SSH key-based trust between the nodes.

.....
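The trust setup elided above can be sketched roughly as follows (run on 192.168.1.5; this is an assumed minimal sequence, presuming root login on the nodes and that the node* names already resolve):

```shell
# Generate a key pair on 192.168.1.5 (empty passphrase), then push the
# public key to every node so the later ssh/scp loops run unattended.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for I in {1..3}; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@node$I
done
```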


# yum -y install luci

# for I in {1..3}; do ssh node$I 'yum -y install ricci'; done

# for I in {1..3}; do ssh node$I 'echo xiaoming | passwd --stdin ricci'; done

# for I in {1..3}; do ssh node$I 'service ricci start;chkconfig ricci on'; done

# for I in {1..3}; do ssh node$I 'yum -y install httpd'; done

# service luci start


Open https://192.168.1.5:8084 in a browser to reach the luci web management interface, and log in as root with the root password.

(screenshot: luci login page)


Create the cluster; for Password, enter the ricci user's password on each node:

(screenshot: create cluster dialog)



Add resources: two of them here, one for the VIP and one for httpd. Once added, they look like the screenshots below:

(screenshot)

(screenshot)



Add a failover domain, as shown below:

(screenshot)


Since this setup has no fence device, none is added. (Note that GFS2 in production requires working fencing; without it, a hung node cannot be safely evicted from the cluster.)




Next, configure iSCSI to provide the shared storage.

On 192.168.1.5:


# yum -y install scsi-target-utils

# for I in {1..3}; do ssh node$I 'yum -y install iscsi-initiator-utils'; done



vim /etc/tgt/targets.conf   # add the following
<target iqn.2015-05.ingo.mingxiao:ipsan.sdb>
  <backing-store /dev/sdb>
     vendor_id Xiaoming
     lun 1
  </backing-store>
  initiator-address 192.168.1.0/24
  incominguser iscsiuser xiaoming
</target>


# service tgtd start



# tgtadm --lld iscsi --mode target --op show  # shows the following
Target 1: iqn.2015-05.ingo.mingxiao:ipsan.sdb
    System information:
        Driver: iscsi
        State: ready
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 21475 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags: 
    Account information:
        iscsiuser
    ACL information:
        192.168.1.0/24


# for I in {1..3}; do ssh node$I 'echo "InitiatorName=`iscsi-iname -p iqn.2015-05.info.mingxiao`" > /etc/iscsi/initiatorname.iscsi'; done



vim /etc/iscsi/iscsid.conf       # on node1, node2, and node3, enable the following three lines
node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = xiaoming
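Instead of editing each node's file by hand, the three lines can be switched on with a small sed helper (a hypothetical convenience, not part of the original walkthrough; it assumes the stock iscsid.conf ships these settings commented out):

```shell
# enable_chap FILE -- uncomment/set the three CHAP settings in an
# iscsid.conf-style file so the initiator authenticates to the target.
enable_chap() {
  sed -i \
    -e 's|^#\{0,1\} *node\.session\.auth\.authmethod *=.*|node.session.auth.authmethod = CHAP|' \
    -e 's|^#\{0,1\} *node\.session\.auth\.username *=.*|node.session.auth.username = iscsiuser|' \
    -e 's|^#\{0,1\} *node\.session\.auth\.password *=.*|node.session.auth.password = xiaoming|' \
    "$1"
}

# Apply it on every node, shipping the function over ssh:
# for I in {1..3}; do ssh node$I "$(declare -f enable_chap); enable_chap /etc/iscsi/iscsid.conf"; done
```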



# for I in {1..3}; do ssh node$I 'iscsiadm -m discovery -t st -p 192.168.1.5'; done
[  OK  ] iscsid: [  OK  ]
192.168.1.5:3260,1 iqn.2015-05.ingo.mingxiao:ipsan.sdb
[  OK  ] iscsid: [  OK  ]
192.168.1.5:3260,1 iqn.2015-05.ingo.mingxiao:ipsan.sdb
[  OK  ] iscsid: [  OK  ]
192.168.1.5:3260,1 iqn.2015-05.ingo.mingxiao:ipsan.sdb



# for I in {1..3}; do ssh node$I 'iscsiadm -m node -T iqn.2015-05.ingo.mingxiao:ipsan.sdb -p 192.168.1.5 -l'; done
Logging in to [iface: default, target: iqn.2015-05.ingo.mingxiao:ipsan.sdb, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2015-05.ingo.mingxiao:ipsan.sdb, portal: 192.168.1.5,3260] successful.
Logging in to [iface: default, target: iqn.2015-05.ingo.mingxiao:ipsan.sdb, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2015-05.ingo.mingxiao:ipsan.sdb, portal: 192.168.1.5,3260] successful.
Logging in to [iface: default, target: iqn.2015-05.ingo.mingxiao:ipsan.sdb, portal: 192.168.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2015-05.ingo.mingxiao:ipsan.sdb, portal: 192.168.1.5,3260] successful.


Install the following two RPM packages on node1, node2, and node3 (both can be found on rpmfind.net):

# rpm -ivh lvm2-cluster-2.02.111-2.el6.x86_64.rpm gfs2-utils-3.0.12.1-68.el6.x86_64.rpm 
warning: lvm2-cluster-2.02.111-2.el6.x86_64.rpm: Header V3 RSA/SHA1 Signature, key ID c105b9de: NOKEY
Preparing...                ########################################### [100%]
   1:gfs2-utils             ########################################### [ 50%]
   2:lvm2-cluster           ########################################### [100%]
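One step the walkthrough glosses over: before clustered volumes can be created, LVM's locking must be switched to cluster mode and clvmd must be running on every node. A sketch, assuming the stock RHEL 6 tooling:

```shell
# Switch lvm.conf to locking_type = 3 (clustered DLM locking) and start
# clvmd on each node; without this, clustered VGs cannot be created or
# activated safely on the shared storage.
for I in {1..3}; do
  ssh node$I 'lvmconf --enable-cluster; service clvmd start; chkconfig clvmd on'
done
```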



On node1:

# pvcreate /dev/sdb

# vgcreate clustervg /dev/sdb

# lvcreate -L 5G -n clusterlv clustervg

# mkfs.gfs2 -j 3 -p lock_dlm -t mycluster:sdb  /dev/clustervg/clusterlv   # one journal per node that will mount it
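GFS2 needs one journal per node that mounts the filesystem concurrently, so a three-node cluster needs at least three (`-j 2`, as sometimes seen for this step, would let only two nodes mount it). If a filesystem was created with too few journals, more can be added later while it is mounted (the mount point below is an assumed example):

```shell
# Add one more journal to a mounted GFS2 filesystem; run on any node
# that currently has it mounted.
gfs2_jadd -j 1 /var/www/html
```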



Add a service named webservice:

(screenshot)


Add the VIP resource created earlier:

(screenshot)




Create a new GFS2 resource and add it to the service:

(screenshot)


Add the httpd resource created earlier:

(screenshot)
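The luci steps above are all written into /etc/cluster/cluster.conf on each node and propagated via ricci. Roughly, the resulting file looks like the sketch below. This is an approximation: the resource names, the VIP address (the original post never states it), and attributes such as config_version will differ in a real deployment.

```xml
<?xml version="1.0"?>
<cluster config_version="10" name="mycluster">
  <clusternodes>
    <clusternode name="node1.mingxiao.info" nodeid="1"/>
    <clusternode name="node2.mingxiao.info" nodeid="2"/>
    <clusternode name="node3.mingxiao.info" nodeid="3"/>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="webdomain" ordered="0" restricted="0">
        <failoverdomainnode name="node1.mingxiao.info"/>
        <failoverdomainnode name="node2.mingxiao.info"/>
        <failoverdomainnode name="node3.mingxiao.info"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <!-- VIP address is assumed; the original post does not state it -->
      <ip address="192.168.1.100" monitor_link="on"/>
      <clusterfs name="webstore" fstype="gfs2" device="/dev/clustervg/clusterlv"
                 mountpoint="/var/www/html" force_unmount="1"/>
      <script name="httpd" file="/etc/init.d/httpd"/>
    </resources>
    <service domain="webdomain" name="webservice" recovery="relocate">
      <ip ref="192.168.1.100"/>
      <clusterfs ref="webstore"/>
      <script ref="httpd"/>
    </service>
  </rm>
</cluster>
```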



Then start the service.

Check the status:

# clustat
Cluster Status for mycluster @ Thu May  7 00:07:21 2015
Member Status: Quorate
 Member Name                                             ID   Status
 ------ ----                                             ---- ------
 node1.mingxiao.info                                         1 Online, Local, rgmanager
 node2.mingxiao.info                                         2 Online, rgmanager
 node3.mingxiao.info                                         3 Online, rgmanager
 Service Name                                   Owner (Last)                                   State         
 ------- ----                                   ----- ------                                   -----         
 service:webservice                             node1.mingxiao.info                            started
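For scripting health checks, the current owner of a service can be pulled out of `clustat` output with a short awk filter (a hypothetical helper, not part of the original setup):

```shell
# clustat_owner SERVICE -- read `clustat` output on stdin and print the
# node currently running the named service.
clustat_owner() {
  awk -v svc="service:$1" '$1 == svc { print $2 }'
}

# Usage: clustat | clustat_owner webservice
```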

