keepalived + nginx proxy
Let's jump straight into the lab. Follow the steps below one by one and you should run into very few problems.
- Prepare five machines
[root@node1 ~]# httpd server, 192.168.40.184
[root@node2 ~]# httpd server, 192.168.40.185
[root@node3 ~]# keepalived + nginx server, 192.168.40.186, VIP: 192.168.40.101
[root@node4 ~]# keepalived + nginx server, 192.168.40.187, VIP: 192.168.40.102
[root@localhost ~] client
- Configure the httpd servers
httpd1 configuration
[root@node1 ~]#
[root@node1 ~]#yum install httpd -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package httpd-2.4.6-45.el7.centos.x86_64 already installed and latest version
Nothing to do
[root@node1 ~]#echo 'webserver1' > /var/www/html/index.html
[root@node1 ~]#systemctl start httpd
httpd was already installed on this machine; if it is not, yum will simply download and install it.
httpd2 configuration
[root@node2 ~]#
[root@node2 ~]# yum -y install httpd
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-45.el7.centos will be installed
--> Processing Dependency: httpd-tools = 2.4.6-45.el7.centos for package: httpd-2.4.6-45.el7.centos.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-45.el7.centos.x86_64
--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-45.el7.centos.x86_64
--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-45.el7.centos.x86_64
--> Running transaction check
---> Package apr.x86_64 0:1.4.8-3.el7 will be installed
---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed
---> Package httpd-tools.x86_64 0:2.4.6-45.el7.centos will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================
Package Arch Version Repository Size
================================================================================================
Installing:
httpd x86_64 2.4.6-45.el7.centos base 2.7 M
Installing for dependencies:
apr x86_64 1.4.8-3.el7 base 103 k
apr-util x86_64 1.5.2-6.el7 base 92 k
httpd-tools x86_64 2.4.6-45.el7.centos base 84 k
mailcap noarch 2.1.41-2.el7 base 31 k
Transaction Summary
================================================================================================
Install 1 Package (+4 Dependent packages)
Total download size: 3.0 M
Installed size: 10 M
Downloading packages:
------------------------------------------------------------------------------------------------
Total 24 MB/s | 3.0 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : apr-1.4.8-3.el7.x86_64 1/5
Installing : apr-util-1.5.2-6.el7.x86_64 2/5
Installing : httpd-tools-2.4.6-45.el7.centos.x86_64 3/5
Installing : mailcap-2.1.41-2.el7.noarch 4/5
Installing : httpd-2.4.6-45.el7.centos.x86_64 5/5
Verifying : httpd-tools-2.4.6-45.el7.centos.x86_64 1/5
Verifying : mailcap-2.1.41-2.el7.noarch 2/5
Verifying : apr-1.4.8-3.el7.x86_64 3/5
Verifying : httpd-2.4.6-45.el7.centos.x86_64 4/5
Verifying : apr-util-1.5.2-6.el7.x86_64 5/5
Installed:
httpd.x86_64 0:2.4.6-45.el7.centos
Dependency Installed:
apr.x86_64 0:1.4.8-3.el7 apr-util.x86_64 0:1.5.2-6.el7
httpd-tools.x86_64 0:2.4.6-45.el7.centos mailcap.noarch 0:2.1.41-2.el7
Complete!
[root@node2 ~]# echo 'webserver2' > /var/www/html/index.html
[root@node2 ~]# systemctl start httpd
- Configure the nginx servers
[root@node3 ~]#
[root@node3 ~]# yum install nginx -y
[root@node3 ~]# vim /etc/nginx/nginx.conf
Open the configuration file and add the following directly inside the http{} block:
upstream httpds {
    server 192.168.40.184;
    server 192.168.40.185;
}
Then find the location block below and change it to:
location / {
    proxy_pass http://httpds;
}
[root@node3 ~]# systemctl start nginx
Test:
[root@node3 ~]# curl 192.168.40.186
webserver1
[root@node3 ~]# curl 192.168.40.186
webserver2
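The alternating webserver1/webserver2 responses above come from nginx's default round-robin balancing across the upstream. If the split needs to be skewed toward one backend, the servers can carry weights; a minimal sketch (the weight values here are illustrative, not part of the original setup):

```nginx
upstream httpds {
    # round-robin is the default; weight biases the share each backend receives
    server 192.168.40.184 weight=2;   # roughly two of every three requests
    server 192.168.40.185 weight=1;
}
```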
[root@node4 ~]#
[root@node4 ~]# yum install nginx -y
[root@node4 ~]# vim /etc/nginx/nginx.conf
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    upstream httpds {
        server 192.168.40.184;
        server 192.168.40.185;
    }
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        root /usr/share/nginx/html;
        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;
        location / {
            proxy_pass http://httpds;
        }
    }
}
[root@node4 ~]# systemctl start nginx
Test:
[root@node4 ~]# curl 192.168.40.187
webserver1
[root@node4 ~]# curl 192.168.40.187
webserver2
- Configure keepalived
[root@node3 ~]#
[root@node3 ~]# yum -y install keepalived
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package keepalived.x86_64 0:1.2.13-8.el7 will be installed
--> Processing Dependency: libsensors.so.4()(64bit) for package: keepalived-1.2.13-8.el7.x86_64
--> Processing Dependency: libnetsnmpmibs.so.31()(64bit) for package: keepalived-1.2.13-8.el7.x86_64
--> Processing Dependency: libnetsnmpagent.so.31()(64bit) for package: keepalived-1.2.13-8.el7.x86_64
--> Processing Dependency: libnetsnmp.so.31()(64bit) for package: keepalived-1.2.13-8.el7.x86_64
--> Running transaction check
---> Package lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7 will be installed
---> Package net-snmp-agent-libs.x86_64 1:5.7.2-24.el7_2.1 will be installed
---> Package net-snmp-libs.x86_64 1:5.7.2-24.el7_2.1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================
Package Arch Version Repository Size
================================================================================================
Installing:
keepalived x86_64 1.2.13-8.el7 base 224 k
Installing for dependencies:
lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 base 41 k
net-snmp-agent-libs x86_64 1:5.7.2-24.el7_2.1 base 702 k
net-snmp-libs x86_64 1:5.7.2-24.el7_2.1 base 747 k
Transaction Summary
================================================================================================
Install 1 Package (+3 Dependent packages)
Total download size: 1.7 M
Installed size: 5.6 M
Downloading packages:
------------------------------------------------------------------------------------------------
Total 15 MB/s | 1.7 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64 1/4
Installing : 1:net-snmp-libs-5.7.2-24.el7_2.1.x86_64 2/4
Installing : 1:net-snmp-agent-libs-5.7.2-24.el7_2.1.x86_64 3/4
Installing : keepalived-1.2.13-8.el7.x86_64 4/4
Verifying : 1:net-snmp-agent-libs-5.7.2-24.el7_2.1.x86_64 1/4
Verifying : keepalived-1.2.13-8.el7.x86_64 2/4
Verifying : 1:net-snmp-libs-5.7.2-24.el7_2.1.x86_64 3/4
Verifying : lm_sensors-libs-3.4.0-4.20160601gitf9185e5.el7.x86_64 4/4
Installed:
keepalived.x86_64 0:1.2.13-8.el7
Dependency Installed:
lm_sensors-libs.x86_64 0:3.4.0-4.20160601gitf9185e5.el7
net-snmp-agent-libs.x86_64 1:5.7.2-24.el7_2.1
net-snmp-libs.x86_64 1:5.7.2-24.el7_2.1
Complete!
[root@node3 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id router1
    vrrp_mcast_group4 224.1.1.1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens37
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.40.101
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens37
    virtual_router_id 2
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.40.102
    }
}
[root@node3 ~]# systemctl start keepalived
[root@node4 ~]#
[root@node4 ~]# yum -y install keepalived
[root@node4 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id router2
    vrrp_mcast_group4 224.1.1.1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    virtual_router_id 1
    priority 95
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.40.101
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface ens37
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.40.102
    }
}
[root@node4 ~]# systemctl start keepalived
- Testing
Everything above is now configured, so let's start testing.
[root@node3 ~]#
[root@node3 ~]# ip a
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:26:5d:87 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.186/24 brd 192.168.40.255 scope global dynamic ens37
valid_lft 1309sec preferred_lft 1309sec
inet 192.168.40.101/32 scope global ens37
valid_lft forever preferred_lft forever
inet6 fe80::1446:88c6:b45f:64e9/64 scope link
valid_lft forever preferred_lft forever
From `ip a` we can see that 192.168.40.101 has been assigned, so this node is working. Now let's look at the other one.
[root@node4 ~]#
[root@node4 ~]# ip a
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:28:c1:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.187/24 brd 192.168.40.255 scope global dynamic ens37
valid_lft 1707sec preferred_lft 1707sec
inet 192.168.40.102/32 scope global ens37
valid_lft forever preferred_lft forever
inet6 fe80::ad66:2c63:7f34:fd2c/64 scope link
valid_lft forever preferred_lft forever
This VIP shows up as well, so node4 also works.
Both nodes look good from `ip a`, and both carry the proxy configuration for the backend httpd1 and httpd2, so now let's test from the client whether access works properly.
[root@localhost ~]
[root@localhost ~]# for i in {1..10};do curl http://192.168.40.101/;done
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
Access via 192.168.40.101 is fine; let's try 102.
[root@localhost ~]# for i in {1..10};do curl http://192.168.40.102/;done
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
192.168.40.102 works as well.
At this point the basic experiment is complete, but we still need to check whether the two virtual IPs keep working when one host goes down, so let's test that too.
[root@node3 ~]#
[root@node3 ~]# systemctl stop keepalived   # stop keepalived
[root@node3 ~]# ip a
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:26:5d:87 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.186/24 brd 192.168.40.255 scope global dynamic ens37
valid_lft 1309sec preferred_lft 1309sec
inet6 fe80::1446:88c6:b45f:64e9/64 scope link
valid_lft forever preferred_lft forever
With the service stopped, the VIP is gone from this node; it should have moved over to node4, so let's check.
[root@node4 ~]#
[root@node4 ~]# ip a
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:28:c1:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.187/24 brd 192.168.40.255 scope global dynamic ens37
valid_lft 1707sec preferred_lft 1707sec
inet 192.168.40.101/32 scope global ens37
valid_lft forever preferred_lft forever
inet 192.168.40.102/32 scope global ens37
valid_lft forever preferred_lft forever
inet6 fe80::ad66:2c63:7f34:fd2c/64 scope link
valid_lft forever preferred_lft forever
Node4 now carries both VIPs. Will the web pages still work? Let's access them from the client again.
[root@localhost ~]#
[root@localhost ~]# for i in {1..10};do curl http://192.168.40.101/;done
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
[root@localhost ~]# for i in {1..10};do curl http://192.168.40.102/;done
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
webserver1
webserver2
Both IPs remain reachable.
The VIP moves because of the priority set in the configuration file: each router multicasts a VRRP advertisement carrying its priority every second, and whenever a router with a higher priority appears, the VIP is placed on that device. To see the process more clearly, we can capture the traffic.
[root@node4 ~]#
[root@node4 ~]# yum -y install tcpdump
[root@node4 ~]# tcpdump -i ens37 -nn host 224.1.1.1
16:03:06.372459 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 95, authtype simple, intvl 1s, length 20
16:03:06.896593 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:03:07.373876 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 95, authtype simple, intvl 1s, length 20
16:03:07.898220 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:03:08.374548 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 95, authtype simple, intvl 1s, length 20
16:03:08.900079 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:03:09.377419 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 95, authtype simple, intvl 1s, length 20
16:03:09.903509 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
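When a capture runs for a while, picking the vrid and prio fields out by eye gets tedious. A small awk filter extracts just the source address, vrid, and prio from each advertisement; below it runs against two of the captured lines pasted inline, so it can be tried without a live capture:

```shell
# Save two sample lines from the tcpdump capture above to a temp file,
# then pull out the source IP, vrid, and prio of each advertisement.
cat <<'EOF' > /tmp/vrrp.log
16:03:06.372459 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 95, authtype simple, intvl 1s, length 20
16:03:06.896593 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
EOF
# gsub strips the commas from the whole line, which re-splits the fields,
# so $3 is the sender, $9 the vrid, and $11 the priority.
awk '{gsub(/,/, ""); print $3, "vrid=" $9, "prio=" $11}' /tmp/vrrp.log
# -> 192.168.40.187 vrid=1 prio=95
# -> 192.168.40.187 vrid=2 prio=100
```

In a live session, pipe tcpdump straight into the same filter instead of reading from a file.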
Watch the prio values above. While node3 was down (we stopped it during the earlier test), node4 advertised vrid 1 at priority 95 and vrid 2 at priority 100 once per second. When keepalived is started again on node3, the capture shows node3 and node4 each advertising their own priority: node4's vrid 1 at 95 and node3's at 100. From then on, vrid 1 is advertised only by node3 at priority 100 and node4's 95 disappears, because node3's priority is higher.
[root@node4 ~]# tcpdump -i ens37 -nn host 224.1.1.1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens37, link-type EN10MB (Ethernet), capture size 65535 bytes
16:17:01.406785 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 95, authtype simple, intvl 1s, length 20
16:17:01.730866 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:20:58.304585 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:20:59.118167 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20
16:20:59.308064 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:21:00.121610 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20
16:21:00.309219 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:21:01.124993 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20
16:21:01.312567 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:21:02.127997 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20
16:21:02.315574 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:21:03.129976 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20
16:21:03.317468 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
16:21:04.134008 IP 192.168.40.186 > 224.1.1.1: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20
16:21:04.321042 IP 192.168.40.187 > 224.1.1.1: VRRPv2, Advertisement, vrid 2, prio 100, authtype simple, intvl 1s, length 20
That concludes the testing. The same holds in reverse: if node4 goes down, its VIP moves to node3. Whichever node advertises the higher priority holds the VIP.
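One caveat worth noting: as configured, keepalived only moves a VIP when the peer's advertisements stop, i.e. when the whole host or the keepalived process dies. A crashed nginx on the MASTER would go unnoticed and requests to its VIP would fail. keepalived's vrrp_script mechanism can track nginx and lower the node's priority while it is down, triggering a failover. A sketch of the idea, to be merged into the existing configuration (the check command and weight value are illustrative):

```
vrrp_script chk_nginx {
    script "pidof nginx"    # exit status decides health; non-zero means failed
    interval 2              # run the check every 2 seconds
    weight -10              # subtract 10 from priority while the check fails
}

vrrp_instance VI_1 {
    # ...existing settings from above...
    track_script {
        chk_nginx
    }
}
```

With weight -10, the MASTER's effective priority for vrid 1 drops from 100 to 90 when nginx dies, which is below the BACKUP's 95, so the VIP moves even though keepalived itself is still running.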