Both the server and the client need the following installed:
nfs-utils: the main NFS programs
portmap: the RPC port mapper
- yum -y install portmap nfs-utils
Configuration
- vi /etc/exports
- /data/webnfs 10.1.88.0/24(rw,sync,anonuid=501,anongid=501)
- service portmap start
- service nfs start
Note: portmap must be started first, because portmap runs on the server permanently, while nfs is started and stopped under heartbeat's control.
- chkconfig --level 345 nfs off
- chkconfig --level 345 portmap on
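Once portmap and nfs are up, it is worth confirming that the RPC services are registered and that the export is visible. A quick check using the standard rpcinfo/showmount/exportfs tools:

```shell
# portmap should list nfs, mountd and nlockmgr among the registered services
rpcinfo -p localhost

# The export list should match what was written to /etc/exports
showmount -e localhost
exportfs -v
```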
Given the actual situation at my friend's company, I decided to enable NFS locking: it lowers NFS performance somewhat, but it guarantees data integrity.
- /sbin/rpc.lockd
- echo "/sbin/rpc.lockd" >> /etc/rc.local
- #!/bin/bash
- # Heartbeat resource script for NFS: on stop, kill any leftover nfsd processes
- NFS="/etc/rc.d/init.d/nfs"
- NFSPID=`/sbin/pidof nfsd`
- case $1 in
- start)
-     $NFS start;
-     ;;
- stop)
-     $NFS stop;
-     if [ "$NFSPID" != "" ]; then
-         for PID in $NFSPID
-         do
-             /bin/kill -9 $PID;
-         done
-     fi
-     ;;
- *)
-     echo "Usage: `basename $0` {start|stop}"
-     ;;
- esac
- chmod 755 nfs
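Before handing the wrapper to heartbeat, it is worth exercising it by hand. A quick smoke test, assuming the script is named nfs in the current directory:

```shell
# Start and stop NFS through the wrapper, exactly as heartbeat would
./nfs start
./nfs stop

# Any other argument should print the usage line
./nfs status
```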
- #!/bin/bash
- # Mount the NFS share exported through the heartbeat VIP
- /bin/mount -t nfs -o nosuid,noexec,nodev,rw,nouser,noauto,bg,hard,nointr,rsize=32768,wsize=32768,tcp 10.1.88.198:/data/nfs /data/www
- chmod 755 mountnfs.sh
- echo "/data/shell/mountnfs.sh" >> /etc/rc.local
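After running the mount script, a quick check that the share really is mounted with the requested options:

```shell
# Run the mount script, then confirm the share appears in the mount table
/data/shell/mountnfs.sh
mount | grep /data/www
df -h /data/www
```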
- [root@test6 /]# cat /etc/nfs.res
- #
- # please have a look at the example configuration file in
- # /usr/share/doc/drbd83/drbd.conf
- #
- global {
- # minor-count 64;
- # dialog-refresh 5; # 5 seconds
- # disable-ip-verification;
- usage-count no;
- }
- common {
- syncer { rate 100M; }
- }
- resource nfs {
- protocol C;
- handlers {
- pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
- pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
- local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
- fence-peer "/usr/lib64/heartbeat/drbd-peer-outdater -t 5";
- pri-lost "echo pri-lost. Have a look at the log files. | mail -s 'DRBD Alert' [email protected]";
- split-brain "/usr/lib/drbd/notify-split-brain.sh [email protected]";
- out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh [email protected]";
- }
- net {
- # timeout 60;
- # connect-int 10;
- # ping-int 10;
- # max-buffers 2048;
- # max-epoch-size 2048;
- cram-hmac-alg "sha1";
- shared-secret "MySQL-HA";
- }
- disk {
- on-io-error detach;
- fencing resource-only;
- }
- startup {
- wfc-timeout 120;
- degr-wfc-timeout 120;
- }
- on test6 {
- device /dev/drbd2;
- disk /dev/sda8;
- address 10.1.88.175:7788;
- meta-disk internal;
- }
- on test7 {
- device /dev/drbd2;
- disk /dev/sda8;
- address 10.1.88.179:7788;
- meta-disk internal;
- }
- }
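The nfs.res file only describes the resource; on a fresh pair of nodes it still has to be initialized and promoted once. A typical first-time bring-up for drbd 8.3 looks roughly like this (the device and mount point are the ones from the config above; run mkfs only on the node chosen as primary):

```shell
# On both test6 and test7: write DRBD metadata and start the service
drbdadm create-md nfs
service drbd start

# On the node chosen as primary only (e.g. test6): force initial sync
drbdadm -- --overwrite-data-of-peer primary nfs

# Still on the primary: create the filesystem and mount point once
mkfs.ext3 /dev/drbd2
mkdir -p /data/drbd/nfs
```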
- [root@test6 ~]# grep -v "^#" /etc/ha.d/haresources
- test6 IPaddr::10.1.88.199/24/eth0:1 mysqld_umount mysqld
- test6 IPaddr::10.1.88.198/24/eth0:1 drbddisk::nfs Filesystem::/dev/drbd2::/data/drbd/nfs::ext3 nfs
- cp nfs /etc/ha.d/resource.d/
- chmod 755 /etc/ha.d/resource.d/nfs
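After copying the script and putting the haresources line on both nodes, restarting heartbeat on the primary should bring up the VIP, the DRBD filesystem and NFS in order; a manual failover can then be tested. A rough sequence (the hb_standby path varies by distribution; /usr/lib64/heartbeat matches the drbd-peer-outdater path used above):

```shell
# On the primary (test6): restart heartbeat so it picks up the new resources
service heartbeat restart

# Verify the resource chain came up
ip addr show eth0 | grep 10.1.88.198
cat /proc/drbd
mount | grep /data/drbd/nfs

# Simulate a failover: put this node into standby and watch test7 take over
/usr/lib64/heartbeat/hb_standby
```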
The figure below shows where each service lives on the VPS.
Because the VPS only has 256 MB of RAM, drbd+heartbeat could not be installed successfully, and keepalived on the load-balancing layer failed as well. However, since every server in the load-balancing layer has two IPs, I used each server's second IP in place of the dual-master load balancers' VIPs as the addresses for DNS round robin, which achieves the same effect as the dual-master load-balancing setup. (I only resorted to this because it simply could not be done on the VPS, and VPSes are expensive; I would not have bought them if not to demonstrate and test for everyone. The company's test machines have no public IPs, which is why I used a VPS for this demonstration. My apologies!)
The figure below shows the DNS round-robin query results.
As the figure shows, the domain www.netnvo.com maps to two IPs, 184.82.79.165 and 184.82.79.161, so DNS round robin is working.
Because my VPS has only 256 MB of RAM, ab and webbench could only reach about 7,000 concurrent connections; in my own test environment, 10,000 concurrent connections is no problem. This VPS setup is only here for study purposes, so please don't run heavy concurrency tests against it, or you will exhaust the VPS's memory. Please keep this in mind!
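The round-robin behaviour described above can be checked from any client with the standard DNS tools (the domain and addresses are the ones shown in the screenshots):

```shell
# Both A records should come back; repeated queries rotate their order
dig +short www.netnvo.com A

# nslookup shows the same two addresses
nslookup www.netnvo.com
```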