LVS Cluster

1. LVS DR working mode
2. DR mode: direct routing
   client -> VS -> RS -> client (the real server answers the client directly)

3. Between LVS and firewall rules defined with iptables, which takes effect first?
   Both are implemented on top of netfilter
4. OSPF routing protocol: works at Layer 3
5. RAID0 | RAID1 | RAID5 | RAID10
6. DRBD shared storage

Load Balancing with LVS

Configure the yum repository; the ipvsadm package lives in the LoadBalancer channel

LVS node: Server1

***** Install ipvsadm | add rules | configure the VIP
ipvsadm options:
    -C  clear the rule table
    -L  list the virtual server table
    -A  add a virtual service
    -a  add a real server to a virtual service
    -t  the service uses TCP (given as address:port)
    -s  scheduling algorithm (rr = round robin)
    -r  the real server to add
    -g  direct routing (DR mode)


[root@server1 ~]# ipvsadm -A -t 172.25.66.100:80 -s rr
[root@server1 ~]# ipvsadm -a -t 172.25.66.100:80 -r 172.25.66.2:80 -g
[root@server1 ~]# ipvsadm -a -t 172.25.66.100:80 -r 172.25.66.3:80 -g
[root@server1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0         
  -> server3:http                 Route   1      0          0         
[root@server1 ~]# /etc/init.d/ipvsadm save
ipvsadm: Saving IPVS table to /etc/sysconfig/ipvsadm:      [  OK  ]
[root@server1 ~]# ip addr add 172.25.66.100/24 dev eth1    # add the virtual IP (VIP)
[root@server1 ~]# 

Real-server side: Server2 | Server3

***** Install httpd and write test pages | configure the VIP | set ARP suppression (the httpd and VIP steps are sketched below)
ARP suppression: the real servers also carry the VIP, so they would answer broadcast ARP requests for it; those replies must be suppressed so that only the director answers
[root@server2 ~]# yum install  arptables_jf.x86_64 -y
[root@server2 ~]# arptables -A IN -d 172.25.66.100 -j DROP    # drop incoming ARP requests for the VIP
[root@server2 ~]# arptables -A OUT -s 172.25.66.100 -j mangle --mangle-ip-s 172.25.66.2    # rewrite outgoing ARP traffic to use the real IP as source
[root@server2 ~]# /etc/init.d/arptables_jf save    # save the rules
Saving current rules to /etc/sysconfig/arptables:          [  OK  ]
[root@server2 ~]# 
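The step list above also calls for installing httpd with a test page and adding the VIP on each real server, which the capture does not show. A minimal sketch for server2, assuming the same interface naming (eth1) as on server1 and a page matching the curl output below; server3 is set up the same way with its own page:

[root@server2 ~]# yum install httpd -y
[root@server2 ~]# echo '<h1>server2<h1>' > /var/www/html/index.html
[root@server2 ~]# /etc/init.d/httpd start
[root@server2 ~]# ip addr add 172.25.66.100/24 dev eth1    # assumption: VIP on eth1; the arptables rules above keep this host from answering ARP for it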



Test:
[root@foundation66 ~]# arp -d 172.25.66.100    # flush the cached ARP entry
[root@foundation66 ~]# arp -a 172.25.66.100    # look up the MAC it now resolves to
[root@foundation66 days3]# for i in {1..10};do curl 172.25.66.100;done
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
[root@foundation66 days3]# arp -an | grep 100   # check which host's MAC the VIP maps to; it should be server1's (the director's)
? (172.25.66.100) at 52:54:00:0f:0f:8f [ether] on br0
[root@foundation66 days3]#

Adding a Health-Check Mechanism

* LVS by itself cannot health-check the real servers
* ldirectord fills that gap for LVS:
    it generates the IPVS rules itself from its configuration file, so none need to be added with ipvsadm
    it probes the back ends, and does so efficiently
* Resources to start: VIP | ldirectord (a lighter-weight alternative to keepalived)
[root@server1 ~]# ls
ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
[root@server1 ~]# rpm -q ldirectord-3.9.5-3.1.x86_64.rpm    # rpm -q takes a package name, not a file name, hence the error below
package ldirectord-3.9.5-3.1.x86_64.rpm is not installed
[root@server1 ~]# rpm -ql ldirectord-3.9.5-3.1
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz
[root@server1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf  resource.d  shellfuncs
[root@server1 ha.d]# vim ldirectord.cf 
......
 25 virtual=172.25.66.100:80
 26         real=172.25.66.2:80 gate
 27         real=172.25.66.3:80 gate
 28         fallback=127.0.0.1:80 gate      # when all real servers are down, serve from the local host
 29         service=http
 30         scheduler=rr
 31         #persistent=600
 32         #netmask=255.255.255.255
 33         protocol=tcp
 34         checktype=negotiate
 35         checkport=80
 36         request="index.html"
 37         #receive="Test Page"
 38         #virtualhost=www.x.y.z
.....
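With checktype=negotiate, ldirectord periodically requests the request= page from each real server over the service protocol and, if receive= is set, checks the reply body. Conceptually the probe is equivalent to:

[root@server1 ha.d]# curl -s http://172.25.66.2:80/index.html    # a good reply keeps 172.25.66.2 in the IPVS table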
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0         
  -> server3:http                 Route   1      0          0         
[root@server1 ha.d]# ipvsadm -C
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server1 ha.d]# cat /var/www/html/index.html 
Under maintenance.....
[root@server1 ha.d]# /etc/init.d/ldirectord start
Starting ldirectord... success
[root@server1 ha.d]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0         
  -> server3:http                 Route   1      0          0         
[root@server1 ha.d]# ipvsadm -ln    # -n: numeric addresses instead of resolved names
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:80 rr
  -> 172.25.66.2:80               Route   1      0          0         
  -> 172.25.66.3:80               Route   1      0          0         
[root@server1 ha.d]# 

Test

[root@foundation66 Desktop]# for i in {1..10};do curl 172.25.66.100;done
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
<h1>server3<h1>
<h1>server2<h1>
*** Stop httpd on Server2
[root@foundation66 Desktop]# for i in {1..5};do curl 172.25.66.100;done
<h1>server3<h1>
<h1>server3<h1>
<h1>server3<h1>
<h1>server3<h1>
<h1>server3<h1>
[root@foundation66 Desktop]# 
*** Stop httpd on both Server2 and Server3
[root@foundation66 Desktop]# for i in {1..5};do curl 172.25.66.100;done
Under maintenance.....
Under maintenance.....
Under maintenance.....
Under maintenance.....
Under maintenance.....
[root@foundation66 Desktop]# 

High Availability with Heartbeat

# Configure the yum repository

[root@server1 ~]# ls
heartbeat-3.0.4-2.el6.x86_64.rpm        heartbeat-libs-3.0.4-2.el6.x86_64.rpm
heartbeat-devel-3.0.4-2.el6.x86_64.rpm  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# yum install heartbeat-3.0.4-2.el6.x86_64.rpm heartbeat-libs-3.0.4-2.el6.x86_64.rpm heartbeat-devel-3.0.4-2.el6.x86_64.rpm -y
[root@server1 ~]# rpm -q heartbeat -d    # list the documentation files shipped with the package
[root@server1 ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@server1 heartbeat-3.0.4]# ls
apphbd.cf  AUTHORS    COPYING       ha.cf        README
authkeys   ChangeLog  COPYING.LGPL  haresources
[root@server1 heartbeat-3.0.4]# cp authkeys haresources ha.cf /etc/ha.d/
[root@server1 heartbeat-3.0.4]# cd /etc/ha.d/
[root@server1 ha.d]# chmod 600 authkeys 
[root@server1 ha.d]# ls
authkeys  harc         ldirectord.cf  README.config  shellfuncs
ha.cf     haresources  rc.d           resource.d
[root@server1 ha.d]# vim ha.cf 
.....
 34 logfacility     local0
 48 keepalive 2
 56 deadtime 30
 61 warntime 10
 71 initdead 60
 76 udpport 753
157 auto_failback on
211 node    server1 # the node listed first is the primary
212 node    server4
220 ping 172.25.66.250
253 respawn hacluster /usr/lib64/heartbeat/ipfail
259 apiauth ipfail gid=haclient uid=hacluster
[root@server1 ha.d]# vim authkeys 
.....
 23 auth 1
 24 1 crc    # crc is integrity-only; use sha1 with a secret on untrusted links
.....
[root@server1 ha.d]# vim haresources 
.....
*** Define the VIP | ldirectord | httpd (starts httpd on the active node); resources start left to right and stop in reverse order
150 server1 IPaddr::172.25.66.100/24/eth1 ldirectord httpd
.....
[root@server1 ha.d]# /etc/init.d/heartbeat restart
[root@server1 ha.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:0f:0f:8f brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.1/24 brd 172.25.66.255 scope global eth1
    inet 172.25.0.100/24 scope global eth1
    inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth1
    inet6 fe80::5054:ff:fe0f:f8f/64 scope link 
       valid_lft forever preferred_lft forever
[root@server1 ha.d]# /etc/init.d/httpd status
httpd (pid  5861) is running...
[root@server1 ha.d]# 



# Backup node (server4)
[root@server1 ~]# scp heartbeat-3.0.4-2.el6.x86_64.rpm heartbeat-devel-3.0.4-2.el6.x86_64.rpm heartbeat-libs-3.0.4-2.el6.x86_64.rpm [email protected]:/root/
[root@server1 ~]# scp /etc/yum.repos.d/rhel-source.repo [email protected]:/etc/yum.repos.d/
[root@server4 ~]# ls
heartbeat-3.0.4-2.el6.x86_64.rpm        heartbeat-libs-3.0.4-2.el6.x86_64.rpm
heartbeat-devel-3.0.4-2.el6.x86_64.rpm  ldirectord-3.9.5-3.1.x86_64.rpm
[root@server4 ~]# yum install ldirectord-3.9.5-3.1.x86_64.rpm -y
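The capture only shows ldirectord being installed on server4; to actually join the cluster it presumably also needs the heartbeat packages and the same /etc/ha.d files. A sketch of the missing steps:

[root@server4 ~]# yum install heartbeat-3.0.4-2.el6.x86_64.rpm heartbeat-libs-3.0.4-2.el6.x86_64.rpm heartbeat-devel-3.0.4-2.el6.x86_64.rpm -y
[root@server1 ha.d]# scp ha.cf authkeys haresources ldirectord.cf server4:/etc/ha.d/
[root@server4 ~]# /etc/init.d/heartbeat start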

Test:

[root@server1 ha.d]# /etc/init.d/heartbeat restart
Stopping High-Availability services: Done.

Waiting to allow resource takeover to complete:Done.

Starting High-Availability services: INFO:  Resource is stopped
Done.
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.66.100:http rr
  -> server2:http                 Route   1      0          0         
  -> server3:http                 Route   1      0          0         
[root@server1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.

[root@server1 ~]# 
# the VIP has migrated to the backup node
[root@server4 ~]# /etc/init.d/heartbeat status
heartbeat OK [pid 5007 et al] is running on server4 [server4]...
[root@server4 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:8c:51:47 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.4/24 brd 172.25.66.255 scope global eth1
    inet 172.25.66.100/24 brd 172.25.66.255 scope global secondary eth1
    inet6 fe80::5054:ff:fe8c:5147/64 scope link 
       valid_lft forever preferred_lft forever
[root@server4 ~]# 
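Because ha.cf sets auto_failback on and server1 is listed first among the node entries, bringing heartbeat back up on server1 pulls the VIP (together with ldirectord and httpd) back to server1.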

DRBD Shared Storage

Reference documents: http://freeloda.blog.51cto.com/2033581/1275384
    drdb配置.pdf (DRBD configuration guide)

[root@server1 ~]# yum install gcc flex rpm-build kernel-devel -y
[root@server1 ~]# tar zxf drbd-8.4.3.tar.gz 
[root@server1 ~]# cd drbd-8.4.3
[root@server1 drbd-8.4.3]# ./configure --with-km --enable-spec
[root@server1 ~]# cd rpmbuild/
[root@server1 rpmbuild]# ls
BUILD  BUILDROOT  RPMS  SOURCES  SPECS  SRPMS
[root@server1 rpmbuild]# cd
[root@server1 ~]# cp drbd-8.4.3.tar.gz rpmbuild/SOURCES/
[root@server1 ~]# cd drbd-8.4.3
[root@server1 drbd-8.4.3]# rpmbuild -bb drbd.spec
[root@server1 drbd-8.4.3]# rpmbuild -bb drbd-km.spec
[root@server1 drbd-8.4.3]# cd ~/rpmbuild/RPMS/x86_64/
[root@server1 x86_64]# ls
drbd-8.4.3-2.el6.x86_64.rpm
drbd-bash-completion-8.4.3-2.el6.x86_64.rpm
drbd-heartbeat-8.4.3-2.el6.x86_64.rpm
drbd-km-2.6.32_431.el6.x86_64-8.4.3-2.el6.x86_64.rpm
drbd-pacemaker-8.4.3-2.el6.x86_64.rpm
drbd-udev-8.4.3-2.el6.x86_64.rpm
drbd-utils-8.4.3-2.el6.x86_64.rpm
drbd-xen-8.4.3-2.el6.x86_64.rpm
[root@server1 x86_64]# rpm -ivh * 
Preparing...                ########################################### [100%]
   1:drbd-utils             ########################################### [ 13%]
   2:drbd-bash-completion   ########################################### [ 25%]
   3:drbd-heartbeat         ########################################### [ 38%]
   4:drbd-pacemaker         ########################################### [ 50%]
   5:drbd-udev              ########################################### [ 63%]
   6:drbd-xen               ########################################### [ 75%]
   7:drbd                   ########################################### [ 88%]
   8:drbd-km-2.6.32_431.el6.########################################### [100%]
[root@server1 x86_64]# 


# Install on server4 as well
[root@server1 x86_64]# scp * [email protected]:/root/
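Installing the copied packages on server4 is not shown; presumably the same rpm step is repeated there:

[root@server4 ~]# rpm -ivh drbd-*    # same package set as on server1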
[root@server1 ~]# cd /etc/drbd.d/
[root@server1 drbd.d]# vim demo.res
.....
resource demo {
        meta-disk internal;
        device /dev/drbd1;
        syncer {
                verify-alg sha1;
        }
        net {
                allow-two-primaries;
        }
        on server1 {
                disk /dev/vdb;
                address 172.25.66.1:7789;
        }
        on server4 {
                disk /dev/vdb;
                address 172.25.66.4:7789;
        }
}
.....
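Note on the config: allow-two-primaries permits dual-primary operation, which is only safe with a cluster filesystem such as GFS2 (see the notes at the end of this section); with the plain ext4 filesystem used below, only one node may be primary at a time.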
[root@server1 drbd.d]# ls
demo.res  global_common.conf
[root@server1 drbd.d]# scp demo.res server4:/etc/drbd.d/
* Initialize and start on both nodes
[root@server1 drbd.d]# drbdadm create-md demo
[root@server1 drbd.d]# /etc/init.d/drbd start
Starting DRBD resources: [
     create res: demo
   prepare disk: demo
    adjust disk: demo
     adjust net: demo
]
.......
[root@server1 drbd.d]#


[root@server1 drbd.d]# cat /proc/drbd 
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@server1, 2017-09-24 16:32:01

 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1048508
[root@server1 drbd.d]# 

Set up primary/secondary so the devices synchronize
[root@server1 ~]# drbdsetup /dev/drbd1 primary --force    # promote this node to primary
[root@server1 ~]# cat /proc/drbd    # data is syncing; also shows whether this node is primary or secondary
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by root@server1, 2017-09-24 16:32:01

 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:1048508 nr:0 dw:0 dr:1049172 al:0 bm:64 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
[root@server1 ~]# mkfs.ext4 /dev/drbd1     # create a filesystem (only the primary can write to the device)
[root@server1 ~]# mount /dev/drbd1 /var/www/html/
[root@server1 ~]# cp /etc/passwd /var/www/html/
[root@server1 ~]# ls /var/www/html/
lost+found  passwd
[root@server1 ~]# 
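To verify that the data really replicates, the usual next step (not captured here) is to switch roles: demote server1, promote server4, and mount the device there. A sketch, assuming server4 has also run drbdadm create-md demo and started the drbd service:

[root@server1 ~]# umount /var/www/html
[root@server1 ~]# drbdadm secondary demo
[root@server4 ~]# drbdadm primary demo
[root@server4 ~]# mount /dev/drbd1 /mnt    # the copied passwd file should be visible here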

...

Shared storage:

NAS: file-level storage served over the network (e.g. NFS/CIFS)
SAN: block-level storage

NFS: file-level sharing
Block devices:
    only a block device can be formatted with a filesystem

Cluster filesystems (apply only at the filesystem level):
    GFS2 | OCFS2 | CLVM (logical volumes: allow dynamic extension, but recovery is harder)
    The two nodes are first built into a high-availability cluster and a distributed lock manager is set up, which lets processes on multiple nodes share files by providing distributed locking


Reference: http://wuhf2015.blog.51cto.com/8213008/1654648
