Experiment: Implementing a Highly Available LVS-DR Model
1. Prepare two RS (real server) machines.
2. Install httpd or nginx on both LVS directors to act as the sorry server.
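A minimal sketch of the sorry-server page on each director; the package manager and page content are assumptions, not taken from these notes:

# on each LVS director -- sketch only
yum install -y nginx
echo "sorry server - site under maintenance" > /usr/share/nginx/html/index.html
systemctl enable --now nginx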
3. Configure the RS servers.
Write the configuration script on backend server RS1.
Run the script, then verify with ifconfig.
Copy the script to RS2 and run it there as well; RS2 should likewise show lo:0 bound to 10.0.56.10.
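A minimal sketch of the RS-side script described above: it sets the standard LVS-DR ARP kernel parameters and binds the VIP to lo:0. Only the VIP 10.0.56.10 comes from these notes; the rest follows the usual pattern.

#!/bin/bash
# lvs-dr-rs.sh -- sketch; run on RS1 and RS2
vip=10.0.56.10
case $1 in
start)
    # answer ARP only for addresses configured on the receiving interface,
    # so the RSes stay silent about the VIP and only the director answers ARP
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    # bind the VIP to lo:0 with a /32 mask -- exactly what ifconfig shows afterwards
    ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac

Run it with ./lvs-dr-rs.sh start, then confirm with ifconfig that lo:0 carries 10.0.56.10.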
4. Install ipvsadm on the LVS director, then add the virtual interface:
ifconfig ens33:0 10.0.56.10 netmask 255.255.255.255 broadcast 10.0.56.10 up
5. Add the RS servers so they are scheduled by LVS.
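A sketch of the scheduling rules on LVS1; they mirror the rules added to LVS2 in step 7, and the rr scheduler is taken from that step:

ipvsadm -A -t 10.0.56.10:80 -s rr
ipvsadm -a -t 10.0.56.10:80 -r 192.168.239.72 -g
ipvsadm -a -t 10.0.56.10:80 -r 192.168.239.73 -g
ipvsadm -Ln    # verify the virtual service and both real servers are listed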
6. Test. Note that the test client needs a route to the 10.0.56.0/24 network; with that in place the basic setup is complete.
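For example, on the test client (the director address 192.168.239.71 is a hypothetical placeholder; use the real address of LVS1's ens33):

route add -net 10.0.56.0 netmask 255.255.255.0 gw 192.168.239.71
curl http://10.0.56.10/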
7. Take LVS1 offline and configure LVS2.
LVS1: ifconfig ens33:0 down
LVS2:
ifconfig ens33:0 10.0.56.10 netmask 255.255.255.255 broadcast 10.0.56.10 up
ipvsadm -A -t 10.0.56.10:80 -s rr
ipvsadm -a -t 10.0.56.10:80 -r 192.168.239.72 -g
ipvsadm -a -t 10.0.56.10:80 -r 192.168.239.73 -g
After this, responses only start coming from LVS2 after a short interval, typically once the upstream ARP entry for the VIP expires or is refreshed.
8. Configure keepalived.
Clear the existing ipvsadm rules: ipvsadm -C
Both LVS hosts carry the following in /etc/keepalived/keepalived.conf (excerpt; the full file is in the appendix):
virtual_ipaddress {
    10.0.56.10/24 dev ens33 label ens33:1
}

virtual_server 10.0.56.10 80 {
    delay_loop 2
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.239.72 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.239.73 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
Then stop keepalived on both LVS hosts.
Start keepalived on LVS1 first.
Test 1 (if the request hangs with a blinking cursor, check iptables -vnL on the LVS server for a DROP rule; it is installed because keepalived.conf contains vrrp_strict).
Test 2
Test 3
Test 4
After a brief pause, scheduling resumes.
Appendix: the complete keepalived.conf on the LVS director
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id lvs1
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.0.156.18
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passwd
    }
    virtual_ipaddress {
        10.0.56.10/24 dev ens33 label ens33:1
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

virtual_server 10.0.56.10 80 {
    delay_loop 2
    lb_algo wrr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80

    real_server 192.168.239.72 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }

    real_server 192.168.239.73 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
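The notify.sh referenced by notify_master / notify_backup / notify_fault is not reproduced in these notes; a common minimal sketch that mails root on every state change looks like this (content assumed):

#!/bin/bash
# /etc/keepalived/notify.sh -- sketch only, not from the original notes
contact='root@localhost'

notify() {
    local subject="$(hostname) changed to $1, VIP transition"
    local body="$(date +'%F %T'): VRRP transition, $(hostname) is now $1"
    echo "$body" | mail -s "$subject" "$contact"
}

case $1 in
master|backup|fault)
    notify $1
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac

Remember to chmod +x the script on both directors.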
Experiment: keepalived for a highly available nginx proxy in front of nginx servers (single-master model)
1. Simple topology diagram.
2. On both keepalived servers, edit /etc/keepalived/keepalived.conf.
Test after each change to confirm that the nginx configuration on both keepalived hosts is correct.
3. Contents of /etc/keepalived/keepalived.conf
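A plausible sketch of that file, assuming it keeps the vrrp_instance from the first experiment and drops the virtual_server block, since here nginx itself proxies the traffic arriving on the VIP (VIP, interface, and password are carried over as assumptions):

vrrp_instance VI_1 {
    state MASTER              # BACKUP with a lower priority on the second node
    interface ens33
    virtual_router_id 55
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass passwd
    }
    virtual_ipaddress {
        10.0.56.10/24 dev ens33 label ens33:1
    }
}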
systemctl restart keepalived
4. Test 1
Test 2
Still successful at this point.
Test 3
Add an nginx health-check script so keepalived can tell whether the nginx process still exists; see the sketch below.
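keepalived's vrrp_script / track_script mechanism is the usual way to do this; in the sketch the exact check command and weights are assumptions:

vrrp_script chk_nginx {
    script "/usr/bin/killall -0 nginx"   # exits non-zero when no nginx process exists
    interval 2
    weight -30                           # drop this node's priority so the backup takes over
    fall 2
    rise 2
}

The script is then referenced from the vrrp_instance block with a track_script { chk_nginx } stanza.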
Test 4
Experiment: keepalived for a highly available nginx proxy in front of nginx servers (dual-master model)
1. Simple topology diagram.
2. Configure ka1.
Configure ka2 (a sketch of the dual-master vrrp_instance pair follows).
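A sketch of the usual dual-master layout; the second VIP 10.0.56.11, router id 56, and the priorities are placeholders, not from the notes. ka1 is MASTER for VI_1 and BACKUP for VI_2, while ka2 mirrors the states and priorities:

# ka1 -- on ka2, swap MASTER/BACKUP and the two priority values
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 55
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.56.10/24 dev ens33 label ens33:1
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 56
    priority 80
    advert_int 1
    virtual_ipaddress {
        10.0.56.11/24 dev ens33 label ens33:2
    }
}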
After saving the configuration, restart keepalived.
3. Configure the RS servers.
4. Configure nginx.conf on ka1.
Configure nginx on ka2 almost identically to ka1; ideally, swap which server block is the default and swap the server_name / proxy_pass pairs so each director's default site points at its own corresponding RS host, as in the sketch below.
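A sketch of the proxy configuration on ka1; the hostnames www1/www2.example.com are hypothetical and the RS addresses are reused from the first experiment as placeholders:

# inside the http {} context of ka1's nginx.conf -- sketch only
server {
    listen 80 default_server;
    server_name www1.example.com;
    location / {
        proxy_pass http://192.168.239.72;
    }
}
server {
    listen 80;
    server_name www2.example.com;
    location / {
        proxy_pass http://192.168.239.73;
    }
}
# on ka2 the default_server flag moves to the www2 block and the proxy_pass targets swap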
5. Configure /etc/hosts name resolution on the client machine.
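For example, using the placeholder VIPs and hostnames from the sketches above:

# client /etc/hosts -- placeholder addresses and names
10.0.56.10  www1.example.com
10.0.56.11  www2.example.com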
6. Test.
Test result
Test result