1). An overview of how an HA cluster works
i). Definition of an HA cluster
A cluster is a group of computers operating as a whole to provide a set of network resources to users.
Each computer system in the cluster is called a cluster node. As business grows, a cluster can be scaled up by adding new nodes. Clusters come in three types: Load Balancing, High Availability, and High Performance; the "HA cluster" we commonly speak of is the High Availability cluster.
Cluster types: LB (lvs / nginx (http/upstream, stream/upstream)), HA, HP
ii). Measuring HA cluster availability
HA cluster availability formula:
HA = MTBF/(MTBF+MTTR)*100%
MTBF: Mean Time Between Failures
MTTR: Mean Time To Repair
The result ranges from 0 to 1 (i.e. 0%-100%); the closer it is to 1, the more available the HA cluster.
Common availability targets: 99%, ..., 99.999%, 99.9999%
99% allows roughly 3.65 days of downtime per year; 99.9% roughly 8.8 hours; 99.99% roughly 53 minutes; 99.999% roughly 5.3 minutes.
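As a worked example with hypothetical values (MTBF = 1000 hours, MTTR = 1 hour), the formula can be evaluated with a one-liner:
awk 'BEGIN{ mtbf=1000; mttr=1; ha=mtbf/(mtbf+mttr); printf "HA=%.4f%%, downtime/year=%.1f hours\n", ha*100, (1-ha)*365*24 }'
HA=99.9001%, downtime/year=8.8 hours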
iii). HA cluster working modes
Active/standby (master/backup)
The nodes of the HA cluster run as an active/standby pair: the active node does the work while the standby monitors it and stands ready. When the active node goes down, the standby takes over all of its work; after the failed node recovers, a preconfigured policy decides whether the service is switched back to it.
Dual-active (master/master)
All nodes in the HA cluster run as active, each serving and maintaining its own workload while monitoring the others. When any node goes down, another node takes over all of its work, keeping the service running.
iv). How an HA cluster operates
Auto-Detect phase: software on each host probes the peer's state over redundant detection links, using monitoring programs and logic checks. Items checked include the host hardware (CPU and peripherals), host network, operating system, database engine and other applications, and the host-to-disk-array link. To keep detection accurate and avoid false verdicts, a safety detection window (detection interval and retry count) can be configured to tune the safety factor, and the gathered information is logged over the hosts' redundant communication links for maintenance reference.
Auto-Switch phase: once one host confirms the peer has failed, the healthy host, besides carrying on its original tasks, takes over the preconfigured standby procedures according to the configured fault-tolerance mode and runs the subsequent programs and services. This kind of switchover is called failover.
Auto-Recovery phase: after the healthy host has taken over the failed host's work, the failed host can be repaired offline. Once repaired, it reconnects to the healthy host over the redundant communication link and service can be switched back to it automatically; per preconfiguration, the recovery action can also be semi-automatic or disabled. Moving a failed node's resources back to it (from the node they failed over to) after it is repaired and back online is usually called failback.
2). Implementing keepalived master/backup and master/master architectures
Test environment: five hosts in total
RealServer1: 192.168.10.114/24
RealServer2: 192.168.10.224/24
DirectorServer1: 192.168.10.226/24 VirtualServer: 192.168.10.10/24
DirectorServer2: 192.168.10.228/24 VirtualServer: 192.168.10.10/24
Keepalived master/backup architecture
i). Prepare the RealServer environment
[root@rs1 ~]#ntpdate ntp1.aliyun.com
31 Dec 23:50:12 ntpdate[1617]: step time server 120.25.115.20 offset 20.688191 sec
[root@rs1 ~]#systemctl stop firewalld.service
[root@rs1 ~]#systemctl disable firewalld.service
[root@rs1 ~]#getenforce
Disabled
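getenforce already shows Disabled here. On a fresh host where SELinux is still enforcing, it can be switched off persistently first (a sketch; adjust to your own security policy):
[root@rs1 ~]#setenforce 0
[root@rs1 ~]#sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config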
ii). Configure the nginx test page (RS1 and RS2 are configured similarly)
[root@rs1 ~]#yum install nginx -y
[root@rs1 ~]#vim /usr/share/nginx/html/index.html
192.168.10.114 RS1_Server
(on RS2 the test page reads: 192.168.10.224 RS2_Server)
[root@rs1 html]#systemctl start nginx.service
[root@rs1 html]#ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:111 *:*
LISTEN 0 128 *:80 *:*
iii). Create the LVS-DR real-server configuration script
[root@rs1 html]#vim RS.sh
#!/bin/bash
# Configure this real server for the LVS-DR model:
# bind the VIP on lo:0 and suppress ARP for it.
vip=192.168.10.10
mask=255.255.255.255

case $1 in
start)
    # Do not answer ARP requests for addresses not on the receiving interface
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    # Never announce the loopback-bound VIP on the LAN
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
[root@rs1 html]#bash -n RS.sh
[root@rs1 html]#bash -x RS.sh start
[root@rs1 html]#scp RS.sh 192.168.10.224:/root/
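The script must be started on RS2 as well; assuming root SSH access from RS1 (hypothetical), for example:
[root@rs1 html]#ssh 192.168.10.224 'bash /root/RS.sh start'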
iv). Configure the DirectorServer hosts (DR1 and DR2 are configured similarly)
[root@dr1 ~]#ntpdate ntp.aliyun.com
1 Jan 00:35:12 ntpdate[1653]: step time server 203.107.6.88 offset 20.667238 sec
[root@dr1 ~]#systemctl stop firewalld.service
[root@dr1 ~]#systemctl disable firewalld.service
[root@dr1 ~]#getenforce
Disabled
v). Configure the keepalived file
(DR2 needs the corresponding IP adjustments, plus state BACKUP and a lower priority)
[root@dr1 ~]#yum install ipvsadm keepalived -y
[root@dr1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id 192.168.10.226
    vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        192.168.10.10/24 dev ens33 label ens33:0
    }
}

virtual_server 192.168.10.10 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@dr1 ~]#systemctl start keepalived
[root@dr1 ~]#ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:66:40:a6 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.226/24 brd 192.168.10.255 scope global noprefixroute dynamic ens33
valid_lft 11937sec preferred_lft 11937sec
inet 192.168.10.10/24 scope global secondary ens33
valid_lft forever preferred_lft forever
DR2 is set up and started the same way, following the configuration above.
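Before testing, the LVS rules that keepalived generated can be sanity-checked on the active director:
[root@dr1 ~]#ipvsadm -ln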
vi). Test from the client
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
Stop the keepalived service on dr1 and observe that dr2's state has changed:
[root@dr1 ~]#systemctl stop keepalived
[root@dr2 keepalived]#systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2019-01-01 02:17:37 CST; 8s ago
Process: 49596 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 49597 (keepalived)
Tasks: 3
CGroup: /system.slice/keepalived.service
├─49597 /usr/sbin/keepalived -D
├─49598 /usr/sbin/keepalived -D
└─49599 /usr/sbin/keepalived -D
1月 01 02:17:37 dr2 Keepalived_vrrp[49599]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
1月 01 02:17:42 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Transition to MASTER STATE
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Entering MASTER STATE
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) setting protocol VIPs.
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
Service scheduling is still normal, which shows the keepalived master/backup configuration has taken effect; the same applies in the other direction.
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
Keepalived master/master architecture
Make the corresponding adjustments on top of the master/backup setup above.
i). Adjust the RS-side script accordingly
[root@rs1 html]#cat RS2.sh
#!/bin/bash
#
vip=192.168.10.99
mask=255.255.255.255

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:1 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:1
    ;;
stop)
    ifconfig lo:1 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac
Copy it to the RS2 host, and enable the script on both:
[root@rs1 html]#scp RS2.sh 192.168.10.224:/root/
[root@rs1 html]#bash -n RS2.sh
[root@rs1 html]#bash -x RS2.sh start
ii). Add the corresponding master/backup stanzas to the DR configuration files
DR1's configuration file:
[root@dr1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id 192.168.10.226
    vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

virtual_server 192.168.10.10 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 2
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 572f97b2
    }
    virtual_ipaddress {
        192.168.10.99
    }
}

virtual_server 192.168.10.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@dr1 ~]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.10.10:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
TCP 192.168.10.99:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
DR2's configuration file:
[root@dr2 ~]#cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id 192.168.10.228
    vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

virtual_server 192.168.10.10 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 572f97b2
    }
    virtual_ipaddress {
        192.168.10.99
    }
}

virtual_server 192.168.10.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
                path /index.html
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@dr2 ~]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.10.10:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
TCP 192.168.10.99:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
iii). Test from the client
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.99/index.html; done
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
192.168.10.224 RS2_Server
192.168.10.114 RS1_Server
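A failover test mirrors the master/backup case: stop keepalived on either director, then confirm from the client that both VIPs keep answering (a sketch of the check):
[root@dr1 ~]#systemctl stop keepalived
[root@CentOS6 ~]#curl http://192.168.10.10/index.html
[root@CentOS6 ~]#curl http://192.168.10.99/index.html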
3). HTTP caching: how it works and common header fields
Program execution exhibits locality:
temporal locality: once a piece of data has been accessed, it is likely to be accessed again soon
spatial locality: when a piece of data is accessed, its neighbors are likely to be accessed too
cache: hit
hot zone: locality
- freshness (validity over time):
- cache space exhausted: LRU, Least Recently Used eviction
- expiration: cache cleanup
Cache hit ratio: hit/(hit+miss)
- range: (0,1)
- document hit ratio: measured by the number of pages
- byte hit ratio: measured by page volume (bytes)
To cache or not:
- private data: private, private cache
- public data: public, public or private cache
How HTTP caching works
When nginx serves as a reverse proxy, its cache can be enabled to speed up responses. But if that nginx is also the load balancer, making it carry the caching duty too will run into a bandwidth bottleneck under high concurrency. So at larger scale a dedicated caching server is placed behind the reverse proxy: the proxying server only proxies, and the caching server only caches. When the front-end host requests a resource, the upstream it points to is no longer the real server but the cache server, and the two communicate through ordinary HTTP request and response messages. When the proxy fetches a resource and the cache server misses locally, the cache server reads the data from the backend server, caches it locally if the caching policy allows, and responds to the front-end host; when the cache server hits, it responds directly, saving the round trip to the backend.
Common headers
- Cache-related Header Fields
- The most important caching header fields are:
- Expires: expiration time
- e.g. Expires: Thu, 22 Oct 2026 06:34:30 GMT
- Cache-Control: max-age=
- ETag
- If-None-Match
- Last-Modified
- If-Modified-Since
- Vary
- Age
- Cache validity decision mechanisms:
- expiration time: Expires
- HTTP/1.0
- Expires
- HTTP/1.1
- Cache-Control: max-age=
- Cache-Control: s-maxage=
- conditional requests:
- Last-Modified/If-Modified-Since
- ETag/If-None-Match
- Expires: Thu, 13 Aug 2026 02:05:12 GMT
- Cache-Control: max-age=315360000
- ETag: "1ec5-502264e2ae4c0"
- Last-Modified: Wed, 03 Sep 2014 10:00:27 GMT
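Conditional requests are easy to observe with curl (a sketch; the URL points at the test real server from section 2 and the validator is the example ETag above, substitute the values of a real resource):
[root@CentOS6 ~]#curl -sI http://192.168.10.114/index.html | grep -iE 'etag|last-modified'
[root@CentOS6 ~]#curl -sI -H 'If-None-Match: "1ec5-502264e2ae4c0"' http://192.168.10.114/index.html | head -1
An unchanged resource is answered with HTTP/1.1 304 Not Modified.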
4). Origin fetch ("back to source") and common multi-level CDN caching
I. CDN origin fetch
How origin fetch works
i). When content on the origin site is updated, the origin proactively pushes it to the CDN nodes.
ii). Conventional CDNs are pull-based. When a user requests a URL and the CDN node it resolves to has no cached copy of the corresponding content, or the cached copy has expired, the node goes back to the origin to fetch it; if nobody requests the content, the CDN node never pulls it from the origin on its own.
iii). "Origin domain" is a term of art in the CDN field. Normally nodes fetch from the origin by IP address, but when a customer's origin has several IPs that change frequently, CDN vendors use an origin domain name instead, to avoid constantly updating the configured origin IPs: even if the origin's IPs change, the existing configuration keeps working.
II. Common multi-level CDN caching
1. What a CDN is
- CDN stands for Content Delivery Network. Its basic idea is to route around the bottlenecks on the Internet that can hurt transfer speed and stability, so that content is delivered faster and more reliably. By placing node servers throughout the network, the CDN forms an intelligent virtual network on top of the existing Internet that redirects each user request, in real time, to the service node nearest the user, based on aggregate information such as network traffic, each node's connections and load, and the distance and response time to the user. The goal is to serve users from nearby nodes, relieving Internet congestion and improving the response time of website access.
2. How a CDN works
- The client browser first checks whether its local cache has expired. If it has, the browser sends a request to the CDN edge node, which checks whether its cached copy of the requested data has expired; if not, it answers the request directly and the HTTP request completes. If the data has expired, the CDN node issues a back-to-the-source request to pull the latest data from the origin.
3. CDN caching
After the browser's local cache expires, the browser requests the CDN edge node. Much like the browser cache, CDN edge nodes have a caching mechanism of their own.
4. Drawbacks of CDN caching
The CDN's offloading not only cuts users' access latency but also reduces origin load. Its drawback is just as clear: when the website is updated but a CDN node's data is not refreshed in time, users will get stale content even after forcing their browser cache to expire with Ctrl+F5, because the edge node has not synchronized the latest data.
5. CDN caching policy
Edge-node caching policies differ between providers, but they generally follow the HTTP standard and use the Cache-Control: max-age field of the response headers to set how long edge nodes cache data.
When a client requests data from a CDN node, the node checks whether the cached copy has expired. If it has not, the cached data is returned to the client directly; otherwise the node sends a back-to-source request, pulls the latest data from the origin, updates its local cache, and returns the fresh data to the client.
CDN providers typically allow cache durations to be specified by file extension, by directory, and along other dimensions, giving users finer-grained cache management.
The CDN cache duration directly affects the back-to-source rate. If it is too short, data on the edge nodes expires frequently, causing frequent origin pulls that raise both origin load and access latency; if it is too long, data updates propagate slowly. Developers have to tune cache durations for each specific business case.
6. CDN cache purging
CDN edge nodes are transparent to developers. Instead of the browser-side Ctrl+F5 forced refresh, which only invalidates the local browser cache, developers can call the provider's "cache purge" API to clear the caches on CDN edge nodes. After publishing updates, they can use this purge function to force the cached data on CDN nodes to expire, guaranteeing that clients pull the latest data on their next access.
5). Using varnish to cache objects and reverse-proxy backend hosts
Request headers tell the caching service how it may use the cache to answer the request:
cache-request-directive =
"no-cache"
"no-store"
"max-age" "=" delta-seconds
"max-stale" [ "=" delta-seconds ]
"min-fresh" "=" delta-seconds
"no-transform"
"only-if-cached"
cache-extension
Response headers tell the caching server how to store the content answered by the upstream server:
cache-response-directive =
"public"
"public" [ "=" <"> 1#field-name <">]
"no-cache" [ "=" <"> 1#field-name <">],可缓存,但响应给客户端之前需要revalidation
"no-store", 不允许存储响应内容于缓存中
"no-transform"
"must-revalidate"
"proxy-revalidate"
"max-age" "=" delta-seconds
"s-maxage" "=" delta-seconds
cache-extension
Open-source solutions:
- squid:
- varnish:
- varnish official site: https://varnish-cache.org/
- Community
- Enterprise
- Program architecture:
- Manager process
- Cacher process, containing several thread types:
- accept, worker, expiry, ...
- shared memory log:
- statistics: counters
- log area: log records
- consumed by varnishlog, varnishncsa, varnishstat, ...
- configuration interface: VCL
- Varnish Configuration Language
- VCL compiler --> C compiler --> shared object
- varnish program environment:
- /etc/varnish/varnish.params: configures the working traits of the varnish service process, e.g. listen address and port, cache storage mechanism
- /etc/varnish/default.vcl: configures the caching behavior of the Child/Cache processes
- main program:
- /usr/sbin/varnishd
- CLI interface:
- /usr/bin/varnishadm
- shared memory log tools:
- /usr/bin/varnishhist
- /usr/bin/varnishlog
- /usr/bin/varnishncsa
- /usr/bin/varnishstat
- /usr/bin/varnishtop
- test tool:
- /usr/bin/varnishtest
- VCL reload helper:
- /usr/sbin/varnish_reload_vcl
- Systemd unit files:
- /usr/lib/systemd/system/varnish.service
- the varnish service itself
- /usr/lib/systemd/system/varnishlog.service
- /usr/lib/systemd/system/varnishncsa.service
- log persistence services
- varnish cache storage mechanisms (Storage Types)
- -s [name=]type[,options]
- malloc[,size]
- in-memory storage; [,size] sets the space; all cached objects are lost on restart
- file[,path[,size[,granularity]]]
- single disk file, opaque ("black box"); all cached objects are lost on restart
- persistent,path,size
- file-based storage, opaque; cached objects survive a restart; experimental
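On CentOS 7 the storage backend is normally set in /etc/varnish/varnish.params, which is sourced as shell variables by the service (a sketch; the 256M size is an arbitrary example):
VARNISH_STORAGE="malloc,256M"
varnishd then receives it as -s $VARNISH_STORAGE.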
- varnishd options:
- program options: the /etc/varnish/varnish.params file
- -a address[:port][,address[:port]], defaults to port 6081
- -T address[:port], defaults to port 6082
- -s [name=]type[,options], defines the cache storage mechanism
- -u user
- -g group
- -f config: the VCL configuration file
- -F: run in the foreground
- ....
- runtime parameters: the /etc/varnish/varnish.params file, DAEMON_OPTS
- DAEMON_OPTS="-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300"
- -p param=value: set a runtime parameter and its value; may be used multiple times
- -r param[,param...]: mark the given parameters read-only
- Reloading the VCL configuration file:
~]# varnish_reload_vcl
- varnishadm
-S /etc/varnish/secret -T [ADDRESS:]PORT
help [<command>]
ping [<timestamp>]
auth <response>
quit
banner
status
start
stop
vcl.load <configname> <filename>
vcl.inline <configname> <quoted_VCLstring>
vcl.use <configname>
vcl.discard <configname>
vcl.list
param.show [-l] [<param>]
param.set <param> <value>
panic.show
panic.clear
storage.list
vcl.show [-v] <configname>
backend.list [<backend_expression>]
backend.set_health <backend_expression> <state>
ban <field> <operator> <arg> [&& <field> <oper> <arg>]...
ban.list
- configuration file management:
- vcl.list
- vcl.load: load and compile
- vcl.use: activate
- vcl.discard: delete
- vcl.show [-v] <configname>: show the details of the given configuration
- runtime parameters:
- param.show -l: list them
- param.show <PARAM>
- param.set <PARAM> <VALUE>
- cache storage:
- storage.list
- backend servers:
- backend.list
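A typical load-and-activate cycle through the CLI looks like this (a sketch; the configuration name test1 is arbitrary):
varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
vcl.load test1 /etc/varnish/default.vcl
vcl.use test1
vcl.list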
VCL:
- a "domain"-specific configuration language
- state engine
- VCL has multiple state engines. The states are related yet isolated from one another; each state engine uses return(x) to indicate which next-level engine it hands control to, and each state engine corresponds to one configuration section in the vcl file, i.e. one subroutine
- e.g. vcl_hash --> hit --> vcl_hit
- The default vcl_recv configuration:
sub vcl_recv {
    if (req.method == "PRI") {
        /* We do not support SPDY or HTTP/2.0 */
        return (synth(405));
    }
    if (req.method != "GET" &&
      req.method != "HEAD" &&
      req.method != "PUT" &&
      req.method != "POST" &&
      req.method != "TRACE" &&
      req.method != "OPTIONS" &&
      req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }
    if (req.method != "GET" && req.method != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }
    return (hash);
}
- Client Side:
- vcl_recv, vcl_pass, vcl_hit, vcl_miss, vcl_pipe, vcl_purge, vcl_synth, vcl_deliver
- vcl_recv return actions:
- hash: vcl_hash
- pass: vcl_pass
- pipe: vcl_pipe
- synth: vcl_synth
- purge: vcl_hash --> vcl_purge
- vcl_hash:
- lookup:
- hit: vcl_hit
- miss: vcl_miss
- pass, hit_for_pass: vcl_pass
- purge: vcl_purge
- Backend Side:
- vcl_backend_fetch, vcl_backend_response, vcl_backend_error
- two special engines:
- vcl_init: VCL code executed before any request is processed; mainly used to initialize VMODs
- vcl_fini: called after all requests have ended, when the VCL configuration is discarded; mainly used to clean up VMODs
VCL syntax:
- (1) VCL files start with vcl 4.0
- (2) //, # and /* foo */ for comments
- (3) Subroutines are declared with the sub keyword, e.g. sub vcl_recv {...}
- (4) No loops, state-limited variables (built-in variables are restricted to certain engines)
- (5) Terminating statements with a keyword for the next action as argument of the return() function, i.e. return(action); this is what drives state-engine transitions
- (6) Domain-specific
The VCL Finite State Machine
- (1) Each request is processed separately
- (2) Each request is independent from others at any given time
- (3) States are related, but isolated
- (4) return(action); exits one state and instructs Varnish to proceed to the next state
- (5) Built-in VCL code is always present and appended below your own VCL
Three main syntactic constructs:
sub subroutine {
    ...
}
if (CONDITION) {
    ...
} else {
    ...
}
return(), hash_data()
VCL Built-in Functions and Keywords
- functions:
- regsub(str, regex, sub)
- regsuball(str, regex, sub)
- ban(boolean expression)
- hash_data(input)
- synthetic(str)
- keywords:
- call subroutine, return(action), new, set, unset
Operators:
- comparison: ==, !=, ~, >, >=, <, <=
- logical: &&, ||, !
- variable assignment: =
- example (obj.hits; resp.* is only available on the client side, so this typically lives in sub vcl_deliver):
if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT via " + server.ip;
} else {
    set resp.http.X-Cache = "MISS via " + server.ip;
}
Variable types:
built-in variables:
req.*: request; related to the request message sent by the client
req.http.*
req.http.User-Agent, req.http.Referer, ...
bereq.*: related to the HTTP request varnish sends to the BE (backend) host
bereq.http.*
beresp.*: related to the response message the BE host sends back to varnish
beresp.http.*
resp.*: related to the response varnish sends to the client
obj.*: attributes of the cached object stored in the cache space; read-only
commonly used variables:
bereq.*, req.*:
bereq.http.HEADERS
bereq.method: the request method
bereq.url: the requested URL
bereq.proto: the request's protocol version
bereq.backend: the backend host to send the request to
req.http.Cookie: the value of the Cookie header in the client's request
req.http.User-Agent ~ "chrome"
beresp.*, resp.*:
beresp.http.HEADERS
beresp.status: the response status code
beresp.proto: the protocol version
beresp.backend.name: the BE host's name
beresp.ttl: the remaining cacheable lifetime of the BE host's response
obj.*:
obj.hits: how many times this object has been hit in the cache
obj.ttl: the object's TTL
server.*:
server.ip
server.hostname
client.*:
client.ip
user-defined variables:
- set
- unset
Example 1: force requests for certain resources to skip the cache lookup:
sub vcl_recv {
    if (req.url ~ "(?i)^/(login|admin)") {
        return(pass);
    }
}
Example 2: for particular resource types, e.g. public images, strip the private markers and force a TTL for which varnish may cache them (beresp.* is available in vcl_backend_response):
if (beresp.http.cache-control !~ "s-maxage") {
    if (bereq.url ~ "(?i)\.(jpg|jpeg|png|gif|css|js)$") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 3600s;
    }
}
Example 3: append the client address to X-Forwarded-For:
if (req.restarts == 0) {
    if (req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}
Trimming cached objects: purge, ban
- (1) make the purge operation available:
sub vcl_purge {
    return(synth(200, "Purged"));
}
- (2) decide when to perform a purge:
sub vcl_recv {
    if (req.method == "PURGE") {
        return(purge);
    }
    ...
}
- add access control rules for such requests:
acl purgers {
    "127.0.0.1";
    "192.168.0.0"/24;
}
sub vcl_recv {
    # allow PURGE from localhost and 192.168.0...
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Purging not allowed for " + client.ip));
        }
        return (purge);
    }
}
sub vcl_purge {
    set req.method = "GET";
    return (restart);
}
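With the ACL above in place, a cached object can be purged from an allowed host (a sketch; assumes varnish listens on its default port 6081):
curl -X PURGE http://127.0.0.1:6081/index.html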
Banning
- (1) from varnishadm:
ban <field> <operator> <arg>
example:
ban req.url ~ ^/javascripts
- (2) in the configuration file, using the ban() function
example:
if (req.method == "BAN") {
    ban("req.http.host == " + req.http.host + " && req.url == " + req.url);
    # Throw a synthetic page so the request won't go to the backend.
    return(synth(200, "Ban added"));
}
How to configure multiple backend hosts:
backend default {
    .host = "172.16.100.6";
    .port = "80";
}
backend appsrv {
    .host = "172.16.100.7";
    .port = "80";
}
sub vcl_recv {
    if (req.url ~ "(?i)\.php$") {
        set req.backend_hint = appsrv;
    } else {
        set req.backend_hint = default;
    }
    ...
}
Director
- a varnish module
- must be imported before use:
- import directors;
example:
import directors;
backend server1 {
    .host =
    .port =
}
backend server2 {
    .host =
    .port =
}
sub vcl_init {
    new GROUP_NAME = directors.round_robin();
    GROUP_NAME.add_backend(server1);
    GROUP_NAME.add_backend(server2);
}
sub vcl_recv {
    set req.backend_hint = GROUP_NAME.backend();
}
Cookie-based session stickiness:
sub vcl_init {
    new h = directors.hash();
    h.add_backend(one, 1);   // backend 'one' with weight '1'
    h.add_backend(two, 1);   // backend 'two' with weight '1'
}
sub vcl_recv {
    // pick a backend based on the cookie header of the client
    set req.backend_hint = h.backend(req.http.cookie);
}
BE Health Check
backend BE_NAME {
    .host =
    .probe = {
        .url =
        .timeout =
        .interval =
        .window =
        .threshold =
    }
}
- .probe: defines the health-check method
- .url: the URL requested during a check; defaults to "/"
- .request: the exact request to send instead, e.g.
- .request =
- "GET /.healthtest.html HTTP/1.1"
- "Host: www.magedu.com"
- "Connection: close"
- .window: how many of the most recent checks to consider when judging health
- .threshold: how many of the .window most recent checks must succeed
- .interval: check frequency
- .timeout: timeout per check
- .expected_response: expected response code, 200 by default
Two ways to configure health checks:
- (1) a named probe, referenced by backends:
probe PB_NAME {
    ...
}
backend NAME {
    .probe = PB_NAME;
    ...
}
- (2) an anonymous probe inline in the backend definition:
backend NAME {
    .probe = {
        ...
    }
}
Example:
probe check {
    .url = "/healthcheck.html";
    .timeout = 1s;
    .interval = 2s;
    .window = 5;
    .threshold = 4;
}
backend default {
    .host = "10.1.0.68";
    .port = "80";
    .probe = check;
}
backend appsrv {
    .host = "10.1.0.69";
    .port = "80";
    .probe = check;
}
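The probe results can then be inspected from the CLI, where each backend reports its health state:
varnishadm backend.list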
Setting backend host attributes:
backend BE_NAME {
    ...
    .connect_timeout = 0.5s;
    .first_byte_timeout = 20s;
    .between_bytes_timeout = 5s;
    .max_connections = 50;
}
varnish runtime parameters:
- thread model:
- cache-worker
- cache-main
- ban lurker
- acceptor
- epoll/kqueue
- ...
Thread-related parameters:
Within a thread pool each request is handled by one thread, so the maximum number of worker threads determines varnish's concurrent response capacity.
- thread_pools: Number of worker thread pools. Best kept less than or equal to the number of CPU cores.
- thread_pool_max: Maximum number of worker threads per pool.
- thread_pool_min: Minimum number of worker threads per pool. Effectively also the "maximum number of idle threads".
- maximum concurrent connections = thread_pools * thread_pool_max (e.g. 2 pools * 500 threads = 1000 connections)
- thread_pool_timeout: Period of time before idle threads are destroyed.
- thread_pool_add_delay: Period of time to wait before subsequent thread creation.
- thread_pool_destroy_delay: Added time to thread_pool_timeout.
Timer-related parameters:
- send_timeout
- timeout_idle
- timeout_req
- how to set them:
- at runtime, via varnishadm: param.set
- permanently:
- in varnish.params:
- DAEMON_OPTS="-p PARAM1=VALUE -p PARAM2=VALUE"
varnish log area
shared memory log:
- counters
- log records
varnishstat - Varnish Cache statistics
- -1
- -1 -f FIELD_NAME
- -l: lists the field names usable with the -f option
- MAIN.cache_hit
- MAIN.cache_miss
examples:
varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
varnishstat -l -f MAIN -f MEMPOOL
- varnishtop - Varnish log entry ranking
- -1
- -i taglist: may be given multiple times; a single option can also take several tags
- -I <[taglist:]regex>
- -x taglist: exclusion list
- -X <[taglist:]regex>
- varnishlog - Display Varnish logs
- varnishncsa - Display Varnish logs in Apache/NCSA combined log format