Load balancing: layer-7 load balancing with Nginx for dynamic/static separation

I. Types of clusters

High-availability (HA) cluster
Avoids single points of failure.
Software: keepalived

Load-balancing (LB) cluster

Increases capacity and concurrency.
Software: Nginx reverse proxy, LVS
Hardware load balancers: F5 (BIG-IP) and Radware

HPC (high-performance computing) cluster
Distributed storage cluster
Greatly increases storage capacity, provides high availability for the data, and keeps it safe.
Software: Ceph

Nginx proxy is one of Nginx's flagship features; with proxy alone you can build a complete layer-7 load balancer.
1. Powerful, high performance, and stable in operation.
2. Simple, flexible configuration.
3. Automatically removes backend servers that stop working correctly.
4. File uploads are handled in asynchronous mode.
5. Supports multiple distribution strategies, including weights, so traffic can be split flexibly.


II. Lab walkthrough (layer-7 load balancing)

1. Environment: three machines

192.168.78.131 # Apache and PHP installed
192.168.78.136 # Apache and PHP installed
192.168.78.143 # Nginx built from source; acts as the proxy server that does the load balancing. Before installing, create the nginx user with -s /sbin/nologin so it cannot log in (see the sketch below).
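A minimal sketch of preparing 192.168.78.143; only the --user/--group options and the nologin shell come from the text above, while the Nginx version, source location and build dependencies are assumptions:

# create the unprivileged account the worker processes will run as
useradd -M -s /sbin/nologin nginx
# build from source (gcc, pcre-devel and zlib-devel are assumed to be installed)
tar xf nginx-1.16.1.tar.gz && cd nginx-1.16.1
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx
make && make install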

2. Start and configure the proxied (backend) machines:

On 192.168.78.131:
# systemctl start httpd
# cd /var/www/html
# rm -rf index.html        # remove the default home page
# vim index.html
192.168.78.131             # save and quit
Put a picture named pic.jpg into /var/www/html.
# vim test.php
192.168.78.131-php
<?php
phpinfo();
?>
/var/www/html now holds the three files index.html, test.php and pic.jpg.
# systemctl restart httpd  # restart the service
On 192.168.78.136:
# systemctl start httpd
# cd /var/www/html
# rm -rf index.html        # remove the default home page
# vim index.html
192.168.78.136             # save and quit
Put a picture named pic.jpg into /var/www/html here as well; use a different image than on .131, only the file name is the same.
# vim test.php
192.168.78.136-php
<?php
phpinfo();
?>
/var/www/html now holds the three files index.html, test.php and pic.jpg.
# systemctl restart httpd  # restart the service
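Before putting the proxy in front, it may help to hit each backend directly; this quick check is an extra step that is not in the original walkthrough:

# run from any host that can reach the backends
curl http://192.168.78.131/index.html
curl http://192.168.78.131/test.php
curl http://192.168.78.136/index.html
curl http://192.168.78.136/test.php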

3. Configure the Nginx server 192.168.78.143

Assume Nginx is installed under /usr/local/.

Configuration:
# vim /usr/local/nginx/conf/nginx.conf
#user nobody;
user nginx; # change the worker user to nginx

location / { # the location block goes inside server{}
root html;
index index.html index.htm;

if ($request_uri ~* \.html$){
proxy_pass http://htmlservers;
}

if ($request_uri ~* \.php$){
proxy_pass http://phpservers;
}

proxy_pass http://picservers;
}


upstream htmlservers { # the load-balanced server pools; note that upstream blocks go inside http{} but outside server{}
server 192.168.78.131:80;
server 192.168.78.136:80;
}
upstream phpservers {
server 192.168.78.131:80;
server 192.168.78.136:80;
}
upstream picservers {
server 192.168.78.131:80;
server 192.168.78.136:80;
}

# /usr/local/nginx/sbin/nginx -s reload #reload the configuration (Nginx must already be running; start it first with /usr/local/nginx/sbin/nginx)
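Optionally, validate the syntax before reloading; the -t switch only tests the configuration file and is an extra step not listed in the original:

# /usr/local/nginx/sbin/nginx -t #expect "syntax is ok" and "test is successful"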

4. Testing:

From any other machine, or from 192.168.78.143 itself:
Enter http://192.168.78.143/index.html in the address bar.
Keep refreshing; the page alternates between 192.168.78.131 and 192.168.78.136.
Enter http://192.168.78.143/test.php in the address bar.
Keep refreshing; the page alternates between 192.168.78.131-php and 192.168.78.136-php (each followed by its phpinfo page).
Enter http://192.168.78.143/pic.jpg in the address bar.
Keep refreshing; the browser alternates between the pic.jpg on 192.168.78.131 and the pic.jpg on 192.168.78.136 (the two images share only their name).

Performance test:
ab -n 2000 -c 2000 http://192.168.78.143/index.html #2000 requests with 2000 concurrent connections
This fails, because by default a process may open at most 1024 files (file descriptors).

ulimit -a          #show the current resource limits
ulimit -n          #shows that by default a process may open at most 1024 files
ulimit -n <value>  #raises the open-file limit (see the example below)
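A sketch of re-running the benchmark after raising the limit; the value 65535 is an arbitrary example (raising it may require root or an entry in /etc/security/limits.conf), and note that Nginx's own worker_connections setting (default 1024 in the events{} block) can become the next ceiling:

ulimit -n 65535                                      #applies to the current shell only
ab -n 2000 -c 2000 http://192.168.78.143/index.html  #should now complete without the fd error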

Summary and extension:

If you also run Tomcat, Apache and Squid backends, the pools can be defined as follows (how they are referenced is sketched after the pool definitions):

# vim /usr/local/nginx/conf/nginx.conf # append the following server-group definitions at the end

upstream tomcat_servers{
server 192.168.78.131:8080;
server 192.168.78.136:8080;
server 192.168.78.135:8080;
}
upstream apache_servers{
server 192.168.78.131:80;
server 192.168.78.136:80;
server 192.168.78.135:80;
}
upstream squid_servers{
server 192.168.78.131:3128;
server 192.168.78.136:3128;
server 192.168.78.135:3128;
}
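The upstream blocks above only define the pools; how requests are routed to them is not shown in the original, so the location patterns below (placed inside server{}, e.g. sending .jsp requests to Tomcat) are an illustrative assumption:

location ~* \.jsp$ {
proxy_pass http://tomcat_servers; # JSP requests go to the Tomcat pool
}
location / {
proxy_pass http://apache_servers; # everything else goes to the Apache pool
}
# squid_servers would be referenced the same way from whichever location should hit the cache tier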

III. Adding keepalived for scheduling (example below)

Topology

++++++++++++
+ Client + 192.168.122.1/24 (the physical host acts as the client)
++++++++++++
|
|
++++++++++++ 192.168.122.254/24
+ Nginx +
++++++++++++
      |
      +-------------------+-------------------+-------------------+
      |                   |                   |                   |
++++++++++++        ++++++++++++        ++++++++++++        ++++++++++++
+  HTML A  +        +  HTML B  +        +  PHP A   +        +  PHP B   +
++++++++++++        ++++++++++++        ++++++++++++        ++++++++++++
eth0                eth0                eth0                eth0
192.168.122.10/24   192.168.122.20/24   192.168.122.30/24   192.168.122.40/24

In LVS terms the front-end Nginx plays the role of the director (DR) and the servers behind it are the real servers (RS):

      nginx          <- director
        |
  --------------
  |            |
nginx        nginx   <- real servers


Install a web service on each real server, create the test pages, start the service, and make sure they can be reached.

HTML A & HTML B
[root@localhost ~]# yum install httpd
Create a test page index.html on each node and start the service (see the sketch below).

PHP A & PHP B
[root@localhost ~]# yum install httpd php   # php is added here so index.php is actually executed rather than served as plain text
Create a test page index.php on each node and start the service.
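A sketch of the test pages, assuming the default Apache document root; the page strings are arbitrary examples:

# on HTML A (HTML B is analogous, with its own address)
[root@localhost ~]# echo "HTML A - 192.168.122.10" > /var/www/html/index.html
[root@localhost ~]# service httpd start
# on PHP A (PHP B is analogous)
[root@localhost ~]# echo '<?php echo "PHP A - 192.168.122.30 "; phpinfo(); ?>' > /var/www/html/index.php
[root@localhost ~]# service httpd start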


Install and configure Nginx
[root@localhost ~]# rpm -ivh nginx-0.6.36-1.el5.i386.rpm
[root@localhost ~]# vim /etc/nginx/nginx.conf
location / {
root /usr/share/nginx/html;
index index.html index.htm;
if ($request_uri ~* \.html$) {
proxy_pass http://htmlserver;
}

if ($request_uri ~* \.php$) {
proxy_pass http://phpserver;
}
}

[root@localhost ~]# vim /etc/nginx/conf.d/test.conf
upstream htmlserver {
server 192.168.122.10;
server 192.168.122.20;
}
upstream phpserver {
server 192.168.122.30;
server 192.168.122.40;
}

[root@localhost ~]# service nginx start


Test against Nginx from the client:
[root@localhost ~]# elinks --dump http://192.168.122.254
[root@localhost ~]# elinks --dump http://192.168.122.254/index.html
[root@localhost ~]# elinks --dump http://192.168.122.254/index.php

===============================

Load-balancing algorithms supported by upstream (a common interview question)
Round robin (default): a weight can be attached with weight; the higher the weight, the more often the server is scheduled (rr, round robin)
Weight: a number; the bigger the number, the higher the weight, and traffic is split proportionally (e.g. 1:3)
rr
wrr
ip_hash: schedules by the client's request IP, so a given client always lands on the same backend, which avoids session problems; cannot be combined with weight
  e.g. client_ip 192.168.1.8 -> Nginx reverse proxy -> always webserver1
fair: schedules by the size and load time of the requested page; requires the third-party upstream_fair module
url_hash: schedules by a hash of the requested URL, so each URL is always directed to the same server; requires a third-party hash module

Server state parameters supported by upstream
down: temporarily take the server out of scheduling
backup: similar to an LVS sorry server; used only when every non-backup server has failed
max_fails: number of allowed request failures, default 1
fail_timeout: how long the server is suspended after max_fails failures

upstream tianyun.com {
# ip_hash;
server 192.168.10.137 weight=1 max_fails=2 fail_timeout=2;
server 192.168.10.20 weight=2 max_fails=2 fail_timeout=2;
server 192.168.10.251 max_fails=2 fail_timeout=5 down;
server 192.168.10.253 backup;
}

Note: when ip_hash is used, the weight and backup server parameters cannot be used.
=================================================================================
Layer-7 load balancing with Nginx

Scheduling to a single group of upstream servers
=================================================================================

Topology

[LB Nginx]
192.168.1.2

[httpd] [httpd] [httpd]
192.168.1.3 192.168.1.4 192.168.1.5


Implementation
1. nginx
http {
upstream httpservers {
server 192.168.1.3:80   weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.4:80   weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.5:80   weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.100:80 backup;
}

server {
location / {
proxy_pass http://httpservers;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
}
}
}

proxy_next_upstream: this directive belongs to the proxy (http_proxy) module; it specifies which error responses from a backend cause the request to be retried on another real server.

2. Apache LogFormat (optional)
LogFormat "%{X-Real-IP}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent

=================================================================================

Layer-7 load balancing with Nginx

Scheduling to different groups of upstream servers
1. Dynamic/static separation
2. Partitioning the site

=================================================================================

Topology

[vip: 192.168.1.80]

[LB1 Nginx] [LB2 Nginx]
192.168.1.2 192.168.1.3

[news] [milis] [videos] [images] [others]
1.11 1.21 1.31 1.41 1.51
1.12 1.22 1.32 1.42 1.52
1.13 1.23 1.33 1.43 1.53
... ... ... ... ...


I. Implementation
1. Scheduling by site partition
http {
upstream news {
server 192.168.1.11:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.12:80 weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.13:80 weight=2 max_fails=2 fail_timeout=2;
}

upstream milis {
server 192.168.1.21:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.22:80 weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.23:80 weight=2 max_fails=2 fail_timeout=2;
}

upstream videos {
server 192.168.1.31:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.32:80 weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.33:80 weight=2 max_fails=2 fail_timeout=2;
}

upstream images {
server 192.168.1.41:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.42:80 weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.43:80 weight=2 max_fails=2 fail_timeout=2;
}

upstream others {
server 192.168.1.51:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.52:80 weight=2 max_fails=2 fail_timeout=2;
server 192.168.1.53:80 weight=2 max_fails=2 fail_timeout=2;
}

server {
location / {
proxy_pass http://others;
}

location /news {
proxy_pass http://news;
}

location /mili {
proxy_pass http://milis;
}

location ~* \.(wmv|mp4|rmvb)$ {
proxy_pass http://videos;
}

location ~* \.(png|gif|jpg)$ {
proxy_pass http://images;
}
}
}


2. Scheduling by dynamic/static separation
http {
upstream htmlservers {
server 192.168.1.3:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.4:80 weight=2 max_fails=2 fail_timeout=2;
}

upstream phpservers {
server 192.168.1.3:80 weight=1 max_fails=2 fail_timeout=2;
server 192.168.1.4:80 weight=2 max_fails=2 fail_timeout=2;
}

server {
location ~* \.html$ {
proxy_pass http://htmlservers;
}

location ~* \.php$ {
proxy_pass http://phpservers;
}
}
}


II. Keepalived implements HA for the director
Note: both the master and the backup director can schedule traffic normally.
1. Install the software on both the master and the backup director
[root@master ~]# yum -y install ipvsadm keepalived
[root@backup ~]# yum -y install ipvsadm keepalived

2. Keepalived
Master
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
router_id director1 //change to director2 on the backup
}

vrrp_instance VI_1 {
state BACKUP
nopreempt
interface eth0 //heartbeat interface; use a dedicated link for the heartbeat where possible
virtual_router_id 80 //must be identical on MASTER and BACKUP
priority 100 //change to 50 on the backup
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.80
}
}

BACKUP: same configuration, except router_id director2 and priority 50 (as noted in the comments above).


3. Start Keepalived (on both master and backup)
# chkconfig keepalived on
# service keepalived start
# ip addr


4. Extension: health-checking Nginx on the director (optional)
Approach:
Have Keepalived run an external script at a fixed interval; when Nginx has failed, the script stops Keepalived on the local machine so the VIP fails over.

a. script
[root@master ~]# cat /etc/keepalived/check_nginx_status.sh
#!/bin/bash
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
/etc/init.d/keepalived stop
fi

[root@master ~]# chmod a+x /etc/keepalived/check_nginx_status.sh

b. keepalived using the script
! Configuration File for keepalived

global_defs {
router_id director1
}

vrrp_script check_nginx {
script "/etc/keepalived/check_nginx_status.sh"
interval 5
}


vrrp_instance VI_1 {
state BACKUP
interface eth0
nopreempt
virtual_router_id 90
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass tianyun
}
virtual_ipaddress {
192.168.1.80
}

track_script {
check_nginx
}

}
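A quick way to confirm that failover works with the configuration above; the exact service commands depend on the distribution and are illustrative:

[root@master ~]# service nginx stop    # simulate an Nginx failure; check_nginx_status.sh then stops keepalived within ~5s
[root@master ~]# ip addr show eth0     # the VIP 192.168.1.80 should no longer be listed here
[root@backup ~]# ip addr show eth0     # ...and should now be held by the backup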

--------------------
proxy_pass
Use it when the backend does not run PHP as a separate process
(Apache with the PHP module, mod_php)
fastcgi_pass
Use it when the backend runs php-fpm
(php-fpm is PHP in FastCGI form)

Detailed backend deployment:
Install the software:
# yum install nginx php php-fpm -y
# vim /etc/nginx/nginx.conf //add the PHP configuration
Add the following inside the server{} block:
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}

Change the php-fpm process account and start the php-fpm process (it listens on port 9000):
# vim /etc/php-fpm.d/www.conf //change the following parameters; the default value is apache
user = nginx
group = nginx

Why set them to nginx?
Because the account configured in nginx.conf is nginx.

# systemctl start php-fpm
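A quick sanity check on a backend before wiring it into the proxy; the document root below follows the usual default of a yum-installed Nginx ("root html" relative to the prefix) and is an assumption:

# echo '<?php phpinfo(); ?>' > /usr/share/nginx/html/index.php
# systemctl start nginx
# curl -I http://127.0.0.1/index.php    # expect HTTP/1.1 200 OK, served by php-fpm through fastcgi_pass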


Front-end Nginx reverse proxy:
upstream web {
server 10.0.0.21;
server 10.0.0.22;
}
upstream phpserver {
server 10.0.0.23;
server 10.0.0.24;
}
#the upstream blocks above go inside http{} but outside server{}

server {
listen 80;
server_name www.baidu.com;
location / { #static HTML requests
proxy_pass http://web;
}

location ~* \.php$ { #PHP requests
proxy_pass http://phpserver;
}
}
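To confirm the split, .html requests should land on the "web" pool and .php requests on the "phpserver" pool; the front-end address below is a placeholder because the proxy's own IP is not given above:

# curl -H "Host: www.baidu.com" http://<front-end-ip>/index.html   # answered by web (10.0.0.21/22)
# curl -H "Host: www.baidu.com" http://<front-end-ip>/index.php    # answered by phpserver (10.0.0.23/24)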

Supplement (installing a PHP node):

# yum install epel-release -y
# yum install nginx php php-fpm -y

php-fpm is a standalone PHP process listening on port 9000.




















