1. Site high availability with Nginx + Keepalived
Keepalived was originally written for the LVS load balancer, to manage and monitor the state of the individual service nodes in an LVS cluster; VRRP-based high-availability support was added later. Besides managing LVS, keepalived can therefore also serve as a high-availability solution for other services.
Keepalived implements high availability chiefly through VRRP, the Virtual Router Redundancy Protocol. VRRP was designed to remove the single point of failure inherent in static routing: it keeps the network running without interruption when individual nodes go down. Keepalived can thus configure and manage LVS, perform health checks on the real servers behind it, and provide high availability for other network services as well.
Failover between a pair of keepalived nodes is driven by VRRP. While the service is running, the MASTER node continually multicasts heartbeat (advertisement) messages to tell the BACKUP nodes it is still alive. When the master fails, the advertisements stop, the backup no longer sees the heartbeats, and its takeover logic claims the master's IP resources and services. When the master recovers, the backup releases the resources it took over and returns to its original backup role.
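This take-back is keepalived's default preemption behavior, and it is tunable per instance. A minimal sketch of the knobs involved (the interface, VRID and address below are placeholders, not values from this setup):

```
vrrp_instance VI_EXAMPLE {
    state BACKUP
    interface eth0            ! placeholder interface name
    virtual_router_id 51      ! placeholder VRID
    priority 100
    advert_int 1              ! heartbeat (advertisement) interval, seconds
    ! nopreempt               ! uncomment so a recovered higher-priority node
    !                         ! does NOT take the VIP back automatically
    virtual_ipaddress {
        192.168.0.100/24      ! placeholder VIP
    }
}
```

With nopreempt (which requires initial state BACKUP), the VIP only moves when the current holder actually fails, avoiding a second interruption when the original master returns.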
Prepare two hosts with keepalived and nginx installed, one as the master node and one as the backup, each with a single NIC. The web document root is mounted from shared storage.
yum install -y epel-release && yum install -y keepalived nginx
(On CentOS 7 nginx ships in EPEL, so epel-release must be installed in a separate, earlier transaction.)
Set up a default site home page on each node to verify the result.
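A quick way to make the two test pages distinguishable; a sketch assuming the stock CentOS 7 nginx docroot (adjust the path to your server block's root):

```shell
# Emit a page naming the serving host, so after a failover the client
# can see which node answered.
page() { echo "served by $(hostname)"; }

# Write it into nginx's default docroot (needs root; failure is
# tolerated here so the sketch has no hard side effects).
page > /usr/share/nginx/html/index.html 2>/dev/null || true
```

After starting nginx on both nodes, `curl` against each node's address should report a different hostname, and `curl` against the VIP shows which node currently holds it.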
Configure keepalived: vim /etc/keepalived/keepalived.conf
Instance-specific parameters:
state MASTER|BACKUP: the node's initial state in this virtual router; exactly one node may be MASTER, all others should be BACKUP;
interface IFACE_NAME: the physical interface this virtual router is bound to;
virtual_router_id VRID: the unique ID of this virtual router, range 1-255;
priority 100: this node's priority within the virtual router, range 1-254;
advert_int 1: interval between VRRP advertisements, in seconds;
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2
    vrrp_mcast_group4 224.0.100.20
}

! Manual failover switch: touch /etc/keepalived/down to lower the priority
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -5
}

! Health check: killall -0 only tests whether an nginx process exists
vrrp_script chk_nginx {
    script "killall -0 nginx && exit 0 || exit 1"
    interval 1
    weight -5
    fall 2
    rise 1
}

! This listing is the backup node (node2); the master uses state MASTER,
! a higher priority, and its own router_id
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 14
    priority 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.93/16 dev eno16777736
    }
    track_script {
        chk_down
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
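The notify_* lines above call /etc/keepalived/notify.sh, which keepalived does not ship; it must be provided on both nodes. A minimal sketch (the recipient and message format are assumptions; real scripts often also start or stop services here):

```shell
#!/bin/bash
# /etc/keepalived/notify.sh -- invoked by keepalived as:
#   notify.sh master|backup|fault
contact='root@localhost'

# Compose the notification body for a given VRRP state.
compose_body() {
    echo "$(date +'%F %T'): VRRP transition, $(hostname) is now $1"
}

notify() {
    local body
    body=$(compose_body "$1")
    logger -t keepalived-notify "$body"                      # record in syslog
    echo "$body" | mail -s "keepalived: $(hostname) $1" "$contact" \
        2>/dev/null || true                                  # best-effort mail
}

case "$1" in
    master|backup|fault) notify "$1" ;;
esac
```

Remember to make it executable on both nodes: chmod +x /etc/keepalived/notify.sh.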
2. A keepalived master/master (active-active) setup
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.0.100.19
}

! node1 is MASTER for VI_1 ...
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 14
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.91/16 dev eno16777736
    }
}

! ... and BACKUP for VI_2, so in normal operation each node carries one VIP
vrrp_instance VI_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 15
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 578f07b2
    }
    virtual_ipaddress {
        10.1.0.92/16 dev eno16777736
    }
}
On the other node, swap the MASTER/BACKUP states and adjust the priority values and the router_id accordingly.
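Concretely, node2's instances mirror the listing above with the roles reversed (and router_id node2 in global_defs); each node is MASTER for one VIP and BACKUP for the other:

```
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 14
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.91/16 dev eno16777736
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 15
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 578f07b2
    }
    virtual_ipaddress {
        10.1.0.92/16 dev eno16777736
    }
}
```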
3. Using varnish as a caching accelerator for nginx
Name : varnish
Arch : x86_64
Version : 4.0.5
Release : 1.el7
Size : 1.2 M
Repo : installed
From repo : epel
Summary : High-performance HTTP accelerator
URL : http://www.varnish-cache.org/
License : BSD
Description : This is Varnish Cache, a high-performance HTTP accelerator.
With the EPEL repository configured, install it: yum install varnish -y
/etc/varnish/varnish.params configures the runtime characteristics of the varnishd process, for example:
RELOAD_VCL=1: recompile the VCL when the service is reloaded
VARNISH_VCL_CONF: path to the VCL policy file
VARNISH_LISTEN_PORT: the port the cache listens on
VARNISH_ADMIN_LISTEN_ADDRESS, VARNISH_ADMIN_LISTEN_PORT: management address and port
VARNISH_STORAGE: the caching mechanism (in-memory or on-disk cache)
Configuration interface: VCL, the Varnish Configuration Language.
VCL compiler --> C compiler --> shared object; each compilation produces a new version.
Main program: /usr/sbin/varnishd
CLI interface: /usr/bin/varnishadm
Shared-memory log tools:
/usr/bin/varnishhist
/usr/bin/varnishlog
/usr/bin/varnishncsa
/usr/bin/varnishstat
/usr/bin/varnishtop
Test tool:
/usr/bin/varnishtest
VCL reload helper:
/usr/sbin/varnish_reload_vcl
Systemd unit files:
/usr/lib/systemd/system/varnish.service
the varnish service itself
/usr/lib/systemd/system/varnishlog.service
/usr/lib/systemd/system/varnishncsa.service
log-persistence services, which copy log records out of shared memory onto persistent storage so they can be analyzed later.
Configuration
vim /etc/varnish/varnish.params and change the following parameters:
VARNISH_LISTEN_PORT=80
VARNISH_STORAGE="file,/data/varnish/cache,1g"
Create the cache file path:
mkdir -pv /data/varnish/cache
chown -R varnish:varnish /data/varnish/cache
Edit the VCL file (vim /etc/varnish/default.vcl) and define the backend host:
backend default {
    .host = "192.168.10.11";
    .port = "80";
}

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "cached";
    } else {
        set resp.http.X-Cache = "uncached";
    }
}
Start the varnish service:
# systemctl start varnish.service
# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:80 *:*
Compile the VCL:
# varnish_reload_vcl
Loading vcl from /etc/varnish/default.vcl
Current running config name is
Using new config name reload_2019-06-30T10:21:32
VCL compiled.
VCL 'reload_2019-06-30T10:21:32' now active
available 0 boot
active 0 reload_2019-06-30T10:21:32
Done
Switch to the new policy version:
# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
vcl.use reload_2019-06-30T10:21:32
200
Test; the response headers now include:
Via: 1.1 varnish-v4
X-Cache: cached
4. Static/dynamic request separation for LNMP with varnish
Both front-end hosts run keepalived and varnish for high availability, with keepalived in master/master mode.
The VCL defines separate backends per resource type, with health checks (each backend can itself be an nginx proxy, allowing further load-balancing expansion, with httpd+php as the third tier); the database sits on shared storage.
vcl 4.0;
import directors;    # the directors used in vcl_init below live in this vmod

probe www_probe {
    .url = "/index.html";
    .interval = 1s;
    .timeout = 1s;
    .window = 8;
    .threshold = 5;
}
backend imgsrv1 {
    .host = "192.168.10.11";
    .port = "80";
    .probe = www_probe;
}
backend imgsrv2 {
    .host = "192.168.10.12";
    .port = "80";
    .probe = www_probe;
}
backend appsrv1 {
    .host = "192.168.10.21";
    .port = "80";
    .probe = www_probe;
}
backend appsrv2 {
    .host = "192.168.10.22";
    .port = "80";
    .probe = www_probe;
}
sub vcl_init {
    # Image backends: weighted random director
    new imgsrvs = directors.random();
    imgsrvs.add_backend(imgsrv1, 10);
    imgsrvs.add_backend(imgsrv2, 20);
    # Static (css/js/html) backends: round-robin director
    new staticsrvs = directors.round_robin();
    staticsrvs.add_backend(appsrv1);
    staticsrvs.add_backend(appsrv2);
    # Dynamic backends: hash director keyed on the session cookie
    new appsrvs = directors.hash();
    appsrvs.add_backend(appsrv1, 1);
    appsrvs.add_backend(appsrv2, 1);
}
sub vcl_recv {
    if (req.url ~ "(?i)\.(css|js|htm|html)$") {
        set req.backend_hint = staticsrvs.backend();
    } elsif (req.url ~ "(?i)\.(jpg|jpeg|png|gif)$") {
        set req.backend_hint = imgsrvs.backend();
    } else {
        set req.backend_hint = appsrvs.backend(req.http.cookie);
    }
}
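The routing in vcl_recv keys purely on the URL's file extension. The same classification can be sanity-checked outside varnish; a sketch in bash, where nocasematch stands in for the `(?i)` flag:

```shell
# Classify a URL the way vcl_recv above routes it: static extensions to
# staticsrvs, image extensions to imgsrvs, everything else to appsrvs.
classify() {
    shopt -s nocasematch
    if   [[ $1 =~ \.(css|js|htm|html)$ ]];  then echo staticsrvs
    elif [[ $1 =~ \.(jpg|jpeg|png|gif)$ ]]; then echo imgsrvs
    else echo appsrvs
    fi
}

classify '/theme/site.CSS'    # -> staticsrvs
classify '/img/logo.png'      # -> imgsrvs
classify '/index.php?id=3'    # -> appsrvs
```

Note the if/elsif chain: each request is routed exactly once, so a .css URL never falls through to the application backends.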