RHCS Cluster Installation and Configuration

A cluster (or multiple clusters) refers to a group of computers running the Red Hat High Availability Add-On.

Lab environment: RHEL 6.5, with iptables and SELinux disabled

Lab hosts: 192.168.2.251 (luci node)
           192.168.2.137 and 192.168.2.138 (ricci nodes)

All three hosts must have the High Availability yum repositories configured:

[base]
name=Instructor Server Repository
baseurl=http://192.168.2.251/pub/rhel6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# HighAvailability rhel6.5
[HighAvailability]
name=Instructor HighAvailability Repository
baseurl=http://192.168.2.251/pub/rhel6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# LoadBalancer packages
[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://192.168.2.251/pub/rhel6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# ResilientStorage
[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://192.168.2.251/pub/rhel6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# ScalableFileSystem
[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://192.168.2.251/pub/rhel6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

All three hosts must also have their clocks synchronized.

192.168.2.251 must resolve its own hostname and those of the other two hosts, and 137 and 138 must resolve themselves and each other (via /etc/hosts or DNS).
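
A minimal /etc/hosts sketch for all three machines (the hostnames are placeholders; the tutorial does not record the real ones):

192.168.2.251   luci.example.com    luci      # management node, placeholder hostname
192.168.2.137   node1.example.com   node1     # cluster node, placeholder hostname
192.168.2.138   node2.example.com   node2     # cluster node, placeholder hostname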

On 137 and 138, install ricci: yum install ricci -y. On 251, install luci: yum install luci -y.

After installation, set a password for the ricci user on both nodes, then start ricci and enable it at boot:

#passwd ricci

#/etc/init.d/ricci start

#chkconfig ricci on

#chkconfig luci on        # on the luci node (251)

Start luci on 251:
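
A sketch of starting it; on RHEL 6 the luci init script prints the HTTPS management URL, normally https://<hostname>:8084:

#/etc/init.d/luci start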


Then open the address luci prints in Firefox.


Enter the root user and password to reach the web configuration interface.


Add a cluster.


During cluster creation, luci automatically installs the required packages on the nodes, and both nodes will reboot.

After the cluster is created, make sure the cluster-related services are running on both nodes.

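A quick way to check the usual RHCS services on each node (the exact list shown in the original screenshot is not preserved; cman, rgmanager, ricci, modclusterd and clvmd are the typical ones):

#for s in cman rgmanager ricci modclusterd clvmd; do /etc/init.d/$s status; done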

Use clustat on the command line to check the status of the two cluster members.


Under /etc/cluster on nodes 137 and 138, cluster.conf and cman-notify.d are generated automatically.


Add a fence device

On 251, install the fence-virtd, fence-virtd-libvirt and fence-virtd-multicast packages with yum (yum install fence-virtd fence-virtd-libvirt fence-virtd-multicast -y), then run the interactive configuration:

# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Available backends:
libvirt 0.1
Available listeners:
multicast 1.0
Listener modules are responsible for accepting requests
from fencing clients.
Listener module [multicast]:
The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.
The multicast address is the address that a client will use to
send fencing requests to fence_virtd.
Multicast IP Address [225.0.0.12]:
Using ipv4 as family.
Multicast IP Port [1229]:
Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.
Interface [none]: br0
The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.
Key File [/etc/cluster/fence_xvm.key]:
Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.
Backend module [libvirt]:
The libvirt backend module is designed for single desktops or
servers. Do not use in environments where virtual machines
may be migrated between hosts.
Libvirt URI [qemu:///system]:
Configuration complete.
=== Begin Configuration ===
backends {
libvirt {
uri = "qemu:///system";
}
}
listeners {
multicast {
key_file = "/etc/cluster/fence_xvm.key";
interface = "br0";
port = "1229";
address = "225.0.0.12";
family = "ipv4";
}
}
fence_virtd {
backend = "libvirt";
listener = "multicast";
module_path = "/usr/lib64/fence-virt";
}
=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

Apart from the values above that differ from the defaults (the interface, set to br0 here, and the final confirmation), you can simply press Enter at every prompt.

Then generate the key file under /etc/cluster on 251 with the following command:

#dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

Copy the generated fence_xvm.key to /etc/cluster on 137 and 138 with scp, then start fence_virtd, as in the sketch below.
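
A sketch of that step (the scp targets come from the text; enabling fence_virtd at boot is an added assumption):

#scp /etc/cluster/fence_xvm.key 192.168.2.137:/etc/cluster/
#scp /etc/cluster/fence_xvm.key 192.168.2.138:/etc/cluster/
#/etc/init.d/fence_virtd start
#chkconfig fence_virtd on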


Add the fence device in the web interface under Fence Devices.


Then go into each cluster node and add a fence method.


Then add a fence instance.


The Domain value of the fence instance is the UUID of this node's virtual machine; the cluster.conf excerpt below illustrates the result.
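
After these steps the fencing part of /etc/cluster/cluster.conf looks roughly like the excerpt below (the device name vmfence, the node name and the UUID are placeholders, not values from the original screenshots):

<clusternode name="node1.example.com" nodeid="1">
        <fence>
                <method name="Method1">
                        <device domain="UUID-of-node1-vm" name="vmfence"/>
                </method>
        </fence>
</clusternode>
...
<fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
</fencedevices>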

Test fencing by isolating one of the nodes with the fence command, then check the fence log with cat /var/log/cluster/fence.log.
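
One way to run the test (executed from the other cluster member; the node name is a placeholder):

#fence_node node2.example.com
#cat /var/log/cluster/fence.log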


Add a failover domain (Failover Domains); the name can be anything you like.

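Roughly what luci writes to cluster.conf for the failover domain (the domain name, node names, priorities and flags here are placeholders and assumptions):

<failoverdomains>
        <failoverdomain name="webfail" nofailback="0" ordered="1" restricted="1">
                <failoverdomainnode name="node1.example.com" priority="1"/>
                <failoverdomainnode name="node2.example.com" priority="2"/>
        </failoverdomain>
</failoverdomains>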

Add resources (Resources).

A floating IP address.


Add a script resource.

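In cluster.conf the two resources end up roughly like this (192.168.2.111 is the floating IP used later in this tutorial; monitor_link and the resource names are assumptions):

<resources>
        <ip address="192.168.2.111" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
</resources>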

Install httpd on both nodes 137 and 138: yum install httpd -y

Add a service group (Service Groups).


Add the resources to the group.

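The resulting service definition is roughly as follows (the service name www matches the clusvcadm commands used later; the recovery policy is an assumption):

<service domain="webfail" name="www" recovery="relocate">
        <ip ref="192.168.2.111"/>
        <script ref="httpd"/>
</service>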

Then start the group you just added; httpd is started automatically on whichever node the group runs on.


Write a test file under /var/www/html/ on 137:

#echo `hostname` > /var/www/html/index.html

Then visit the floating IP you just bound, 192.168.2.111, in a browser and you will see the test file.

Use clusvcadm -e www and clusvcadm -d www to start and stop the www service group.


The clusvcadm command can also relocate the service group to another node, as in the sketch below.
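
A relocation sketch (standard clusvcadm usage; the target node name is a placeholder):

#clusvcadm -r www -m node2.example.com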

Add distributed storage

On 251, carve out an LVM volume to share.

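A sketch of carving out that volume (the disk /dev/vdb, the VG/LV names and the size are placeholders; the values from the original screenshot are not preserved):

#pvcreate /dev/vdb
#vgcreate iscsivg /dev/vdb
#lvcreate -L 4G -n iscsilv iscsivg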

On 251, install:

# yum install scsi-target-utils.x86_64 -y

On 137 and 138, install:

#yum install iscsi-initiator-utils -y

Then edit /etc/tgt/targets.conf on 251:

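A minimal targets.conf sketch (the IQN and the backing logical volume are placeholders; the initiator-address lines restrict access to the two cluster nodes):

<target iqn.2014-08.com.example:server.target1>
        backing-store /dev/iscsivg/iscsilv
        initiator-address 192.168.2.137
        initiator-address 192.168.2.138
</target>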

Then start tgtd:

/etc/init.d/tgtd start

On 137 and 138, discover and log in to the shared LUN with the iscsi tools.

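A discovery and login sketch (standard iscsiadm usage against the target host 192.168.2.251):

#iscsiadm -m discovery -t st -p 192.168.2.251
#iscsiadm -m node -l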

Create a primary partition on this disk.


After partitioning, format it (this only needs to be done on one node): mkfs.ext4 /dev/sda1

Add a filesystem resource in the web interface.

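Roughly the cluster.conf entry this creates (the resource name wwwdata is taken from a later step of this tutorial; the other attributes are assumptions):

<fs device="/dev/sda1" force_unmount="1" fstype="ext4" mountpoint="/var/www/html" name="wwwdata"/>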

As before, add it to the service group.

#clusvcadm -d www

Now mount this device on /var/www/html first (e.g., mount /dev/sda1 /var/www/html).

Start the service:

#clusvcadm -e www

#clustat     # check whether the service started successfully

Network filesystem: GFS2

clusvcadm -d www     # stop the service group first

Then delete the wwwdata resource in the web interface.

On the shared disk, create a partition with partition id 8e (Linux LVM).


Then turn it into an LVM logical volume.

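A sketch of the LVM creation on one node (names and size are placeholders; for a clustered volume group, the clustered locking configured in the next step must already be enabled and clvmd running, in which case vgcreate takes -cy):

#pvcreate /dev/sda1
#vgcreate clustervg /dev/sda1
#lvcreate -L 2G -n gfs2lv clustervg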

Configure /etc/lvm/lvm.conf.

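The lvm.conf change shown in the original screenshots is most likely the clustered-locking setting; a command-line sketch of the same change, run on both nodes:

#lvmconf --enable-cluster        # sets locking_type = 3 in /etc/lvm/lvm.conf
#/etc/init.d/clvmd start
#chkconfig clvmd on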

Format the GFS2 partition and mount it.

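A formatting and mount sketch (the cluster name after -t must match the name given when the cluster was created in luci, which this tutorial does not record; the filesystem name, journal count and mount point are assumptions):

#mkfs.gfs2 -p lock_dlm -t mycluster:gfs2data -j 2 /dev/clustervg/gfs2lv
#mount /dev/clustervg/gfs2lv /var/www/html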

Set the security context.


Write this partition into /etc/fstab.

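A plausible fstab line (device path and options are assumptions; _netdev defers the mount until networking and the cluster services are available):

/dev/clustervg/gfs2lv   /var/www/html   gfs2   _netdev   0 0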

In the web interface, add a global filesystem (GFS2) resource and add it to the service group.


At this point, the basic RHCS configuration is complete.




