【 Building a Quorum-Based Cluster 】
Lab environment:
192.168.0.10    node10.zhou.com    gnbd server
192.168.0.11    node11.zhou.com    apache1
192.168.0.12    node12.zhou.com    apache2
1. Create the GNBD shared storage.
192.168.0.10:
[root@node10 /]# gnbd_serv -n
[root@node10 /]# gnbd_export -c -d /dev/sda8 -e qdisk1
gnbd_export: created GNBD qdisk1 serving file /dev/sda8
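Before moving on, it is worth confirming the device is actually being served. A minimal sanity check (on my reading of the gnbd_export man page, -l lists the current exports; verify the flag against your version before relying on it):
[root@node10 /]# gnbd_export -l    # list active exports; qdisk1 should appear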
192.168.0.11:
[root@node11 cluster]# modprobe gnbd
[root@node11 cluster]# gnbd_import -i 192.168.0.10 -n
gnbd_import: created directory /dev/gnbd
gnbd_import: created gnbd device qdisk1
gnbd_recvd: gnbd_recvd started
[root@node11 gnbd]# mkqdisk -c /dev/gnbd/qdisk1 -l qdisk1
mkqdisk v0.6.0
Writing new quorum disk label 'qdisk1' to /dev/gnbd/qdisk1.
WARNING: About to destroy all data on /dev/gnbd/qdisk1; proceed [N/y] ? y
Initializing status block for node 1...
Initializing status block for node 2...
Initializing status block for node 3...
Initializing status block for node 4...
Initializing status block for node 5...
Initializing status block for node 6...
Initializing status block for node 7...
Initializing status block for node 8...
Initializing status block for node 9...
Initializing status block for node 10...
Initializing status block for node 11...
Initializing status block for node 12...
Initializing status block for node 13...
Initializing status block for node 14...
Initializing status block for node 15...
Initializing status block for node 16...
[root@node11 gnbd]#
[root@node11 cluster]# ls /dev/gnbd/
qdisk1
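Before importing the disk on the second node, the new quorum-disk label can be double-checked from node11. mkqdisk -L scans for quorum disks and prints their labels:
[root@node11 gnbd]# mkqdisk -L    # should report /dev/gnbd/qdisk1 with label 'qdisk1'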
192.168.0.12:
[root@node12 cluster]# modprobe gnbd
[root@node12 cluster]# gnbd_import -i 192.168.0.10 -n
gnbd_import: created directory /dev/gnbd
gnbd_import: created gnbd device qdisk1
gnbd_recvd: gnbd_recvd started
[root@node12 cluster]# ls /dev/gnbd/
qdisk1
192.168.0.11:
system-config-cluster
【 Screenshots of the corresponding configuration steps 】
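system-config-cluster writes its result to /etc/cluster/cluster.conf. The exact file depends on the choices made in the GUI; the sketch below is only an illustration of what a two-node configuration with a quorum disk and an Apache service on the floating IP 192.168.0.222 (both taken from this article) might look like. The cluster name, service layout, and all timing values here are assumptions, not taken from the screenshots:
<?xml version="1.0"?>
<cluster name="zhou_cluster" config_version="1">   <!-- cluster name is an assumed example -->
  <cman expected_votes="3"/>                       <!-- 2 node votes + 1 quorum-disk vote -->
  <quorumd interval="1" tko="10" votes="1" label="qdisk1"/>
  <clusternodes>
    <clusternode name="node11.zhou.com" nodeid="1" votes="1"/>
    <clusternode name="node12.zhou.com" nodeid="2" votes="1"/>
  </clusternodes>
  <rm>
    <resources>
      <ip address="192.168.0.222" monitor_link="1"/>
      <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service autostart="1" name="apache">
      <ip ref="192.168.0.222"/>
      <script ref="httpd"/>
    </service>
  </rm>
</cluster>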
scp /etc/cluster/cluster.conf 192.168.0.12:/etc/cluster/cluster.conf
On 192.168.0.12, check that the cluster.conf file now exists under /etc/cluster/; once it does, cman can be started.
Start the cman service on both cluster nodes at roughly the same time and wait for the heartbeat connection to be established.
[root@node11 gnbd]# service cman start
Starting cluster:
Loading modules... done
Mounting configfs... done
Starting ccsd... done
Starting cman... done
Starting daemons... done
Starting fencing... done
[ OK ]
Note: cman only finishes starting once every node in the cluster has started it and the heartbeat connections have succeeded; at that point communication between all cluster nodes is known to be working.
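Once cman is up, membership and quorum can be inspected from any node with the standard cman utilities; the things to look for are the quorum fields in the status output and one line per node in the nodes output:
[root@node11 gnbd]# cman_tool status   # shows votes, expected votes, and whether the cluster is quorate
[root@node11 gnbd]# cman_tool nodes    # lists each node and its membership state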
Then start the resource manager on both nodes, and the cluster is ready for use.
[root@node11 gnbd]# service rgmanager start
Starting Cluster Service Manager: [ OK ]
Start it on the other node as well.
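With rgmanager running on both nodes, clustat shows which node currently owns the apache service; the -i flag refreshes the display at a fixed interval, which is handy while testing:
[root@node11 gnbd]# clustat        # one-shot view of cluster members and services
[root@node11 gnbd]# clustat -i 2   # refresh every 2 seconds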
Test:
[root@node10 /]# ping -c 1 192.168.0.222
PING 192.168.0.222 (192.168.0.222) 56(84) bytes of data.
64 bytes from 192.168.0.222: icmp_seq=1 ttl=64 time=4.37 ms
--- 192.168.0.222 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.379/4.379/4.379/0.000 ms
Use firefox to check that the service is up.
The page loads successfully; check the monitoring status (Figure 17).
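If no browser is handy, the same check can be done from the command line (assuming curl is installed; any HTTP client works):
[root@node10 /]# curl -I http://192.168.0.222/   # any HTTP response header (e.g. 200 OK) confirms Apache is answering on the floating IP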
Testing the cluster:
Stop the httpd service on the node11.zhou.com node and check whether the service automatically fails over to the standby node; the standby serves the default Red Hat Apache test page.
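A minimal way to drive and observe this failover, assuming the service currently lives on node11 (rgmanager's periodic status check should notice the stopped service and recover it, per the behavior described above):
[root@node11 gnbd]# service httpd stop    # simulate the service failure
[root@node12 cluster]# clustat -i 2       # watch the apache service move to node12.zhou.com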
This article is from the blog “周旭光_不断进取”; please contact the author before reposting!