Building a Simple High-Availability Cluster with Nginx + Keepalived

I used to build high-availability clusters with heartbeat or corosync+pacemaker; keepalived turns out to be much simpler to get running.
The keepalived master periodically sends VRRP advertisements to the backups. When a backup stops receiving those advertisements for a while, the backups hold an election, and the winner becomes the new master and takes over the resources (here, the VIP). For the underlying theory, see
http://bbs.ywlm.net/thread-790-1-1.html
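If you want to watch those advertisements yourself, tcpdump on the VRRP interface is enough; VRRP is IP protocol 112 and the advertisements go to the multicast address 224.0.0.18 (eth0 is simply the interface used in this lab):

tcpdump -i eth0 -nn host 224.0.0.18
## roughly one advertisement per second from the current master with the default interval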

Goal: build a highly available web cluster from 2 Nginx+Keepalived nodes and 2 LAMP nodes.

Plan:

  1. ng1.laoguang.me    192.168.1.22  ng1
  2. ng2.laoguang.me    192.168.1.23  ng2
  3. lamp1.laoguang.me  192.168.1.24  lamp1
  4. lamp2.laoguang.me  192.168.1.25  lamp2
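If you want the hostnames above to resolve on every node, hosts entries like the following will do; this is purely for convenience, since keepalived and nginx are configured by IP in this article:

cat >> /etc/hosts <<EOF
192.168.1.22  ng1.laoguang.me    ng1
192.168.1.23  ng2.laoguang.me    ng2
192.168.1.24  lamp1.laoguang.me  lamp1
192.168.1.25  lamp2.laoguang.me  lamp2
EOF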

Topology: ng1/ng2 share the VIP 192.168.1.18 via keepalived and proxy to lamp1/lamp2 (diagram omitted).

1. Basic environment preparation
Install nginx on ng1 and ng2.
Build a LAMP stack on lamp1 and lamp2, or just install httpd; I only installed httpd and will not walk through that here (see my other posts if you need it). Change index.html on lamp1 and lamp2 to read "lamp1" and "lamp2" respectively so the two backends are easy to tell apart; in a real cluster the content would be identical and served from shared storage.
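A minimal version of that backend preparation, assuming a stock Apache document root of /var/www/html (adjust if yours differs):

## on lamp1
yum -y install httpd
echo lamp1 > /var/www/html/index.html
## on lamp2
yum -y install httpd
echo lamp2 > /var/www/html/index.html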

2. Install and configure keepalived on ng1 and ng2
Download: http://www.keepalived.org/download.html
2.1 Install keepalived

tar xvf keepalived-1.2.7.tar.gz
cd keepalived-1.2.7
./configure --prefix=/usr/local/keepalived
## configure may complain about a missing popt-devel package; a quick yum install fixes that
make && make install
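On a RHEL/CentOS-style system the build dependencies that usually trip up this configure run can be installed up front; the package names below are the common ones, verify them against your distribution:

yum -y install gcc popt-devel openssl-devel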

2.2 Put the configuration files and scripts in place

mkdir /etc/keepalived
## keepalived reads its configuration from /etc/keepalived by default
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
## it is a single binary, so copying it over is enough; with more binaries you would adjust PATH instead
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
## extra options read by the init script
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
## the init script, obviously
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
## the keepalived configuration file we actually care about
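Optionally, if you also want keepalived to start on boot (not part of the original steps, just the usual SysV housekeeping on this kind of system):

chkconfig --add keepalived
chkconfig keepalived on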

2.3 Edit /etc/keepalived/keepalived.conf on ng1

! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]              ## who gets mail when something fails
   }
   notification_email_from keepalived@localhost  ## address the failure mails are sent from
   smtp_server 127.0.0.1        ## SMTP server IP
   smtp_connect_timeout 30      ## SMTP connect timeout
   router_id LVS_DEVEL          ## identifier for this server
}
vrrp_instance VI_1 {
   state BACKUP
   ## both nodes are set to BACKUP, so they elect the master by priority; if you write MASTER here,
   ## that node starts as master, a backup takes over when it fails, and the original master
   ## preempts the VIP back once it recovers
   interface eth0        ## interface that carries the VRRP advertisements; check that yours really is eth0
   virtual_router_id 51  ## virtual router ID; master and backup in the same group must use the same value
   priority 100          ## the all-important priority
   nopreempt             ## no preemption: a node that failed does not grab the resources back after it recovers
   advert_int 1          ## advertisement interval in seconds
   authentication {              ## authentication
       auth_type PASS            ## authentication method
       auth_pass www.laoguang.me ## shared secret
   }
   virtual_ipaddress {
       192.168.1.18/24 dev eth0  ## the VIP
   }
}
## delete everything after this point; it is only useful on an LVS director

Copy the file to ng2 and change only the priority to 90:

scp /etc/keepalived/keepalived.conf 192.168.1.23:/etc/keepalived/
## on ng2
vi /etc/keepalived/keepalived.conf   ## set priority 90, leave everything else as it is
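A quick sanity check that the two nodes now differ only in priority (run from ng1, using the same path as above):

ssh 192.168.1.23 cat /etc/keepalived/keepalived.conf | diff /etc/keepalived/keepalived.conf -
## the only difference reported should be priority 100 versus priority 90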

2.4 Start keepalived on ng1 and ng2

service keepalived start

Check the log:

 
 
tail /var/log/messages
Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Entering BACKUP STATE
Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
Nov 27 08:07:54 localhost Keepalived_healthcheckers[41870]: Using LinkWatch kernel netlink reflector...
Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) forcing a new MASTER election
Nov 27 08:07:55 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Transition to MASTER STATE
Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Entering MASTER STATE
Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) setting protocol VIPs.
Nov 27 08:07:56 localhost Keepalived_healthcheckers[41870]: Netlink reflector reports IP 192.168.1.18 added
Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.18
Nov 27 08:08:01 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.18

Check which machine the VIP landed on:

ip addr     ## on ng1
.... (output trimmed)

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 00:0c:29:e8:90:0b brd ff:ff:ff:ff:ff:ff
   inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
   inet 192.168.1.18/32 scope global eth0
   inet6 fe80::20c:29ff:fee8:900b/64 scope link
      valid_lft forever preferred_lft forever

So the VIP is currently bound to ng1.
3. Testing Keepalived

3.1 Stop keepalived on ng1 (or shut ng1 down entirely) and watch the VIP move

 
 
service keepalived stop
ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 00:0c:29:e8:90:0b brd ff:ff:ff:ff:ff:ff
   inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
   inet6 fe80::20c:29ff:fee8:900b/64 scope link
      valid_lft forever preferred_lft forever

3.2 Check whether ng2 has picked up the VIP

 
 
ip addr
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 00:0c:29:dd:00:77 brd ff:ff:ff:ff:ff:ff
   inet 192.168.1.23/24 brd 192.168.1.255 scope global eth0
   inet 192.168.1.18/32 scope global eth0
   inet6 fe80::20c:29ff:fedd:77/64 scope link
      valid_lft forever preferred_lft forever

The VIP fails over as expected, so keepalived is set up correctly.
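One way to watch the failover from a third machine on the same subnet is to keep probing the VIP while you stop keepalived (run as root; the interface name eth0 is this lab's, adjust to yours):

ping 192.168.1.18
## a lost packet or two while the backup takes over, then replies resume
arping -I eth0 -c 3 192.168.1.18
## run before and after the failover: the replying MAC changes from ng1's NIC to ng2's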

4. Configure Nginx as a reverse proxy

4.1 Edit the nginx configuration file

 
 
vi /etc/nginx/nginx.conf
user  nginx nginx;   ## user and group nginx runs as
worker_processes  2; ## number of worker processes
error_log /var/log/nginx/error.log  notice; ## error log
pid        /tmp/nginx.pid;                  ## where the pid file lives
worker_rlimit_nofile 65535;                 ## max open files per worker, use together with ulimit -SHn
events {
   use epoll;                 ## event model
   worker_connections  65536; ## max connections per worker process
}
http {                        ## http block
   include       mime.types;  ## pull in the MIME types
   default_type  application/octet-stream; ## default type
   log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';
   ## log format
   access_log  /var/log/nginx/http.access.log  main; ## access log
   client_max_body_size 20m;  ## largest request body allowed
   client_header_buffer_size 16k; ## buffer size for client request headers
   large_client_header_buffers 4 16k; ## number and size of buffers for large request headers
   sendfile       on;         ## send files from kernel space straight to the TCP queue
   tcp_nopush     on;
   tcp_nodelay    on;
   keepalive_timeout  65;     ## keep-alive timeout
   gzip  on;                  ## enable compression
   gzip_min_length 1k;        ## smallest response worth compressing
   gzip_buffers 4 16k;        ## compression buffers
   gzip_http_version 1.1;     ## protocol version supported
   gzip_comp_level 2;         ## compression level
   gzip_types text/plain application/x-javascript text/css application/xml;  ## types to compress
   gzip_vary on;              ## let front-end caches store the compressed pages
   upstream laoguang.me {     ## the upstream block defines the pool of real servers
       server 192.168.1.24:80 max_fails=3 fail_timeout=10s;  ## backend address; after max_fails errors within fail_timeout it is taken out of rotation
       server 192.168.1.25:80 max_fails=3 fail_timeout=10s;
   }
   server {
       listen       80;           ## listen port
       server_name  192.168.1.18; ## server name
       root   html;               ## document root
       index  index.html index.htm; ## index files, obviously
       #charset koi8-r;
       access_log  logs/192.168.1.18.access.log  main;
       ## access log for this server block
       location / {
               proxy_pass http://laoguang.me;  ## reverse proxy to the upstream
               proxy_redirect off;
               proxy_set_header X-Real-IP $remote_addr;
               ## pass the real client IP on to the backends
               proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       }
       location /nginx {
               access_log off;
               stub_status on; ## status page
       }
       error_page   500 502 503 504  /50x.html;
       location = /50x.html {
           root   html;
       }
   }
}
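Before going any further it is worth letting nginx validate the file and making sure the access-log directory exists (the nginx binary is assumed to be in PATH, as it is with a packaged install):

mkdir -p /var/log/nginx
nginx -t    ## should report that the syntax is ok and the test is successful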

4.2 Copy it to ng2

scp /etc/nginx/nginx.conf 192.168.1.23:/etc/nginx/

4.3 Test whether the reverse proxy load-balances

Start httpd on lamp1 and lamp2:

service httpd start

Restart nginx on ng1:

service nginx restart

Browse to the real IP and check whether requests are served round-robin by the two backends:
http://192.168.1.22

Test ng2 the same way. If both nodes load-balance correctly, carry on; a quick client-side check is sketched below.
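A quick way to see the round-robin from any client, assuming curl is installed and the backends return the "lamp1"/"lamp2" pages set up in section 1:

for i in 1 2 3 4; do curl -s http://192.168.1.22/; done
## expected output alternates: lamp1 lamp2 lamp1 lamp2
for i in 1 2 3 4; do curl -s http://192.168.1.23/; done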

5. Test keepalived and nginx working together

Right now 192.168.1.18 sits on ng2. Browse to http://192.168.1.18 and check that requests still alternate between the backends.
Run service keepalived stop on ng2, then hit http://192.168.1.18 again and check that it still alternates (with keepalived running again on ng1, the VIP moves back there).
Run service httpd stop on lamp1, then hit http://192.168.1.18 and check whether you get an error; you should not, because nginx takes the failed backend out of rotation and sends everything to lamp2. A simple watch loop for these tests is sketched below.
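A minimal loop for watching the three tests above from a client machine, assuming curl is available (Ctrl-C to stop):

while true; do
    curl -s --max-time 2 http://192.168.1.18/ || echo "request failed"
    sleep 1
done
## during the keepalived failover you may see a failed request or two, then replies resume;
## after httpd is stopped on lamp1, every reply should read lamp2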

That completes the highly available web server setup: there is no single point of failure, and the failure of any single node does not interrupt the service.
