Red Hat HA (RHCS): ricci + luci + fence

RHEL6 - Red Hat HA (RHCS) with ricci + luci + fence

1. Overall architecture:

223056713.png

223058793.png

223100594.png

2. Lab environment:

luci management host: 192.168.122.1

ricci nodes: 192.168.122.34

192.168.122.33

192.168.122.82

yum repository:

[rhel-source]

name=Red Hat Enterprise Linux $releasever - $basearch - Source

baseurl=ftp://192.168.122.1/pub/rhel6.3

gpgcheck=0


[HighAvailability]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/HighAvailability

gpgcheck=0


[LoadBalancer]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/LoadBalancer

gpgcheck=0


[ResilientStorage]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/ResilientStorage

gpgcheck=0


[ScalableFileSystem]

name=Instructor Server Repository

baseurl=ftp://192.168.122.1/pub/rhel6.3/ScalableFileSystem

gpgcheck=0


The HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem sections above must be added to the yum repository configuration.
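The same repository definitions also have to be present on every ricci node, because luci later installs the cluster packages on them via yum. A minimal sketch for distributing the repo file (the file name ha.repo is an assumption; use whatever name you saved it under):

# for n in 33 34 82; do scp /etc/yum.repos.d/ha.repo 192.168.122.$n:/etc/yum.repos.d/; done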


Note:

NetworkManager is not supported on cluster nodes. If NetworkManager is installed on a cluster node, remove or disable it.
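If NetworkManager happens to be installed on a node, a minimal way to disable it with the standard RHEL 6 service tools:

[root@desk82 ~]# chkconfig NetworkManager off

[root@desk82 ~]# /etc/init.d/NetworkManager stop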


3. Environment configuration:

Perform the following steps on all HA nodes:

Install ricci on every HA node; install luci on the management client (which must have a web browser).

# yum -y install ricci

[root@desk82 ~]# chkconfig ricci on

[root@desk82 ~]# /etc/init.d/ricci start

[root@desk82 ~]# passwd ricci    # a password must be set, otherwise authentication will fail
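The installation of luci itself is not shown above; on the management host (192.168.122.1 here) it would typically be installed the same way, a sketch:

[root@wangzi ~]# yum -y install luci

[root@wangzi ~]# chkconfig luci on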

Start luci:

[root@wangzi rhel6cluster]# /etc/init.d/luci start

Point your web browser to https://localhost.localdomain:8084 (or equivalent) to access luci

Access it in a web browser:

https://localhost.localdomain:8084

and log in as root.

223205799.png



Create the cluster:

223207426.png

At this point the luci management host is automatically installing the required packages on the ricci HA nodes:

223209403.png

On the HA nodes you can see a yum process running:

223211527.png

After it completes:


223534995.png

4. Fence device configuration:

A virtual machine fence device is used. The mapping between virtual machines and hostnames:

hostname    ip                kvm domain name
desk34      192.168.122.34    ha1
desk33      192.168.122.33    ha2
desk82      192.168.122.82    desk82
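If you prefer to address the nodes by hostname rather than by IP, consistent name resolution helps; a sketch of /etc/hosts entries matching the table above (optional here, since this cluster is defined by IP addresses):

192.168.122.34   desk34

192.168.122.33   desk33

192.168.122.82   desk82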


On the luci host:

[root@wangzi docs]# yum -y install fence-virt fence-virtd fence-virtd-libvirt fence-virtd-multicast

[root@wangzi docs]# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

Available backends:

libvirt 0.1

Available listeners:

multicast 1.1


Listener modules are responsible for accepting requests

from fencing clients.


Listener module [multicast]:


The multicast listener module is designed for use environments

where the guests and hosts may communicate over a network using

multicast.


The multicast address is the address that a client will use to

send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:


Using ipv4 as family.


Multicast IP Port [1229]:


Setting a preferred interface causes fence_virtd to listen only

on that interface. Normally, it listens on the default network

interface. In environments where the virtual machines are

using the host machine as a gateway, this *must* be set

(typically to virbr0).

Set to 'none' for no interface.


Interface [none]: virbr0    # depends on the host's NIC configuration; it may also be br0

If the virtual machines reach the physical host through NAT, choose virbr0; if they are bridged, choose br0.


The key file is the shared key information which is used to

authenticate fencing requests. The contents of this file must

be distributed to each physical host and virtual machine within

a cluster.


Key File [/etc/cluster/fence_xvm.key]:


Backend modules are responsible for routing requests to

the appropriate hypervisor or management layer.


Backend module [checkpoint]: libvirt

The libvirt backend module is designed for single desktops or

servers. Do not use in environments where virtual machines

may be migrated between hosts.


Libvirt URI [qemu:///system]:


Configuration complete.


=== Begin Configuration ===

fence_virtd {
        listener = "multicast";
        backend = "libvirt";
        module_path = "/usr/lib64/fence-virt";
}

listeners {
        multicast {
                key_file = "/etc/cluster/fence_xvm.key";
                address = "225.0.0.12";
                family = "ipv4";
                port = "1229";
                interface = "virbr0";
        }
}

backends {
        libvirt {
                uri = "qemu:///system";
        }
}

=== End Configuration ===

Replace /etc/fence_virt.conf with the above [y/N]? y

The fence_virtd configuration file on the luci host is located at /etc/fence_virt.conf.


[root@wangzi docs]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
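The /etc/cluster directory may not exist on a node yet, so it is worth creating it before copying the key over; a sketch assuming root ssh access to the nodes:

[root@wangzi cluster]# for n in 33 34 82; do ssh 192.168.122.$n mkdir -p /etc/cluster; done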


[root@wangzi cluster]# scp /etc/cluster/fence_xvm.key desk33:/etc/cluster/

[root@wangzi cluster]# scp /etc/cluster/fence_xvm.key desk34:/etc/cluster/

[root@wangzi cluster]# scp /etc/cluster/fence_xvm.key desk82:/etc/cluster/


[root@wangzi cluster]# /etc/init.d/fence_virtd start

[root@wangzi cluster]# netstat -anplu | grep fence

udp 0 0 0.0.0.0:1229 0.0.0.0:* 10601/fence_virtd

[root@wangzi cluster]# fence_xvm -H vm4 -o reboot    # check whether fencing can control the virtual machine; if it can, the corresponding VM will reboot
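Before wiring the fence device into the cluster, you can also confirm from an HA node that fencing requests reach fence_virtd over multicast; a sketch, assuming the fence-virt client package and the key are already present on the node:

[root@desk34 ~]# fence_xvm -o list    # should list the KVM domains (ha1, ha2, desk82, ...) with their UUIDs and state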

Add the fence device:

223619500.png

Every operation performed in the web interface is written to /etc/cluster/cluster.conf on each node:

[root@desk82 cluster]# cat cluster.conf

<?xml version="1.0"?>
<cluster config_version="2" name="wangzi_1">
        <clusternodes>
                <clusternode name="192.168.122.34" nodeid="1"/>
                <clusternode name="192.168.122.33" nodeid="2"/>
                <clusternode name="192.168.122.82" nodeid="3"/>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_xvm" name="vmfence"/>
        </fencedevices>
</cluster>

On each node:

223621332.png

223623768.png
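After binding the vmfence device to each node in the web UI (using the KVM domain name from the mapping table above), cluster.conf typically gains a per-node fence section roughly like the sketch below; the method name and the domain attribute are assumptions and depend on the luci/fence-virt version:

<clusternode name="192.168.122.34" nodeid="1">
        <fence>
                <method name="fence-1">
                        <device domain="ha1" name="vmfence"/>
                </method>
        </fence>
</clusternode>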

Add a failover domain:

223625658.png

"Priority" is the failover priority; the lower the number, the higher the priority.

"No Failback" means the service will not fail back (by default it does fail back).
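For reference, a failover domain configured with these options lands in the <rm> section of cluster.conf roughly as in the sketch below; the domain name webfail and the exact priorities are illustrative assumptions:

<rm>
        <failoverdomains>
                <failoverdomain name="webfail" nofailback="0" ordered="1" restricted="1">
                        <failoverdomainnode name="192.168.122.34" priority="1"/>
                        <failoverdomainnode name="192.168.122.33" priority="2"/>
                        <failoverdomainnode name="192.168.122.82" priority="3"/>
                </failoverdomain>
        </failoverdomains>
</rm>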


Add resources:

223627780.png

223630637.png

The IP is a virtual floating IP used for external access; it appears on whichever HA node is currently providing the service.

The smaller the number in the last field, the faster the floating IP switches over.

The httpd service must be installed on every HA node beforehand, but must not be started by hand (a minimal sketch follows).
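A sketch of preparing httpd on each HA node; the cluster (rgmanager) will start and stop it, so it must not be enabled at boot:

[root@desk34 ~]# yum -y install httpd

[root@desk34 ~]# chkconfig httpd off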


Add a service group:

223632590.png

Under the service group apsche, add the resources created above (IP Address and httpd).

223958527.png

You can see that the cluster has automatically started httpd on 192.168.122.34.

[root@desk34 ~]# /etc/init.d/httpd status

httpd (pid 14453) is running...
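To check the service from a client, put a recognizable page on each node and query the floating IP (192.168.122.122, the address used later in the tests); a sketch:

[root@desk34 ~]# echo desk34 > /var/www/html/index.html    # repeat on every HA node with its own hostname

[root@wangzi ~]# curl http://192.168.122.122               # returns the page of whichever node currently owns the service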


Testing:

[root@desk34 ~]# clustat

Cluster Status for wangzi_1 @ Sat Sep 7 02:52:18 2013
Member Status: Quorate

 Member Name                         ID   Status
 ------ ----                         ---- ------
 192.168.122.34                         1 Online, Local, rgmanager
 192.168.122.33                         2 Online, rgmanager
 192.168.122.82                         3 Online, rgmanager

 Service Name                  Owner (Last)                  State
 ------- ----                  ----- ------                  -----
 service:apsche                192.168.122.34                started
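Besides provoking failures, the service can also be controlled by hand with rgmanager's clusvcadm, which is handy during testing; a short sketch:

[root@desk34 ~]# clusvcadm -r apsche -m 192.168.122.33    # relocate the service to desk33

[root@desk34 ~]# clusvcadm -d apsche                      # disable (stop) the service

[root@desk34 ~]# clusvcadm -e apsche                      # enable (start) it again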

1) Stop the httpd service:

[root@desk34 ~]# /etc/init.d/httpd stop


224000141.png

224002551.png

The floating IP 192.168.122.122 appears on desk33:

[root@desk33 ~]# ip addr show

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d0:fe:21 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.33/24 brd 192.168.122.255 scope global eth0
    inet 192.168.122.122/24 scope global secondary eth0

If httpd on desk33 is then stopped as well, the service switches to desk82.

If httpd on desk34 is started again, the floating IP returns to desk34, because desk34 has the highest priority and failback is enabled.

2) Simulate a network failure:

[root@desk34 ~]# ifconfig eth0 down

desk34 is rebooted (fenced), and the service switches to desk33.

After desk34 has finished rebooting, the service fails back to desk34.

3) Kernel crash:

[root@desk34 ~]# echo c > /proc/sysrq-trigger

224004122.png

224006919.png

The host reboots and the service switches to desk33.


Xi'an Shiyou University

Wang Ziyin

[email protected]

