Nginx serves users' requests for static pages while Tomcat handles the JSP requests, giving us dynamic/static separation. A frontend nginx then reverse-proxies the backend nginx + Tomcat cluster to provide load balancing. This improves concurrency and throughput, and it hides the backends from clients, which also improves security. A sketch of the request flow follows the environment list below.
Environment:
Frontend: CentOS 192.168.0.211: nginx + ngx_cache_purge
Backend 1: CentOS 192.168.0.222: nginx + Tomcat
Backend 2: CentOS 192.168.0.223: nginx + Tomcat
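For orientation, here is a rough sketch of the request flow implied by the setup above (port numbers as used in the configs that follow):

            client
              |
              v  (80)
   frontend nginx + ngx_cache_purge  (192.168.0.211)
     - caches and serves static content (proxy_cache)
     - load-balances everything else to the backends
              |
              v  (80)
   backend nginx  (192.168.0.222 / 192.168.0.223)
     - serves static files straight from the Tomcat docroot
     - proxies *.jsp to Tomcat
              |
              v  (8080)
   Tomcat  (same hosts)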
Software used:
Nginx: http://nginx.org/en/download.html
JDK: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Tomcat: http://tomcat.apache.org/download-80.cgi
First, configure the backend Tomcat servers:
1. JDK setup:
[root@Tomcat ~]# tar zxf jdk-8u40-linux-i586.tar.gz
[root@Tomcat ~]# mv jdk1.8.0_40/ /usr/local/jdk
[root@Tomcat ~]# vi /etc/profile
JAVA_HOME=/usr/local/jdk
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export JAVA_HOME PATH CLASSPATH
[root@Tomcat ~]# source /etc/profile
[root@Tomcat ~]# java -version    # seeing the version string means the JDK is set up correctly
java version "1.8.0_40"
2. Tomcat installation
First create an unprivileged user to run Tomcat. If you hit permission problems, you can temporarily disable SELinux.
[root@Tomcat ~]# useradd -s /sbin/nologin tomcat
[root@Tomcat ~]# passwd tomcat
[root@Tomcat ~]# tar zxf apache-tomcat-8.0.21.tar.gz
[root@Tomcat ~]# mv apache-tomcat-8.0.21 /usr/local/tomcat
[root@Tomcat ~]# chown -R tomcat.tomcat /usr/local/tomcat
[root@Tomcat ~]# su -l -s /bin/bash -c /usr/local/tomcat/bin/startup.sh tomcat    # the tomcat user has a nologin shell, so force a login bash for the startup script
[root@Tomcat ~]# echo "su -l -s /bin/bash -c /usr/local/tomcat/bin/startup.sh tomcat" >> /etc/rc.local    # start on boot
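Before going further it is worth a quick sanity check that Tomcat actually came up; a minimal check, assuming the stock connector port 8080:

[root@Tomcat ~]# netstat -lnpt | grep 8080          # the HTTP connector should be listening (ss -lnpt works too)
[root@Tomcat ~]# curl -I http://127.0.0.1:8080/     # expect an HTTP 200 from the Tomcat ROOT app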
3. Nginx installation
[root@Tomcat ~]# useradd -s /sbin/nologin www
[root@Tomcat ~]# yum install -y make zlib-devel openssl-devel pcre pcre-devel
[root@Tomcat ~]# tar zxvf nginx-1.4.4.tar.gz
[root@Tomcat ~]# cd nginx-1.4.4
[root@Tomcat nginx-1.4.4]# ulimit -SHn 51200
[root@Tomcat nginx-1.4.4]# ./configure --user=www --group=www --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module
[root@Tomcat nginx-1.4.4]# make && make install
[root@Tomcat nginx-1.4.4]# \cp -pa /usr/local/nginx/sbin/nginx /etc/init.d/
[root@Tomcat nginx-1.4.4]# chmod +x /etc/init.d/nginx
[root@Tomcat nginx-1.4.4]# echo "ulimit -SHn 51200" >> /etc/rc.d/rc.local
[root@Tomcat nginx-1.4.4]# echo "/etc/init.d/nginx" >> /etc/rc.d/rc.local    # start on boot
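Optionally, confirm the build and test the configuration before relying on the rc.local entries; a minimal check could be:

[root@Tomcat ~]# /usr/local/nginx/sbin/nginx -V    # the three --with-http_* modules should appear under "configure arguments"
[root@Tomcat ~]# /usr/local/nginx/sbin/nginx -t    # syntax-check nginx.conf
[root@Tomcat ~]# /etc/init.d/nginx                 # start nginx (the copied binary doubles as the "init script" here)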
4. Main configuration file: nginx.conf
user  www www;
worker_processes  1;    # match the number of CPU cores; going beyond that only adds load
error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
pid        logs/nginx.pid;
worker_rlimit_nofile 51200;

events {
    use epoll;
    worker_connections  51200;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    server_name_in_redirect off;
    client_max_body_size 10m;          # maximum size of a single client request body
    client_body_buffer_size 128k;      # buffer size for client request bodies

    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;

    #keepalive_timeout  0;
    keepalive_timeout  60;

    set_real_ip_from 192.168.0.0/24;   # trusted proxy address range
    real_ip_header   X-Real-IP;        # take the real client IP from the frontend's X-Real-IP header

    gzip on;                           # enable gzip compression
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 3;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain application/x-javascript text/css application/xml image/jpeg image/gif image/png;
    gzip_vary on;
    gzip_proxied any;

    proxy_redirect off;
    proxy_connect_timeout 300;         # timeout for establishing a connection to Tomcat
    proxy_send_timeout 300;            # timeout for sending the request to the backend
    proxy_read_timeout 300;            # timeout waiting for the backend response
    proxy_buffer_size 4k;              # buffer for the first part of the backend response (the headers)
    proxy_buffers 6 64k;               # response buffers; 64k suits pages that average under 64k
    proxy_busy_buffers_size 128k;      # buffer size under high load (proxy_buffers * 2)
    proxy_temp_file_write_size 64k;    # amount written to the temporary file at a time
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    # lets the web servers behind us see the real client IP

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;

    include vhost/*.conf;
}
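Because of set_real_ip_from / real_ip_header above, $remote_addr on this backend is rewritten to the client address the frontend sends in X-Real-IP, so ordinary logging already records the real visitor. As an optional illustration (the log_format name realip is my own, not part of the original config), something like this inside the http block makes that visible:

    # hypothetical log format: $remote_addr is the rewritten real client IP,
    # $http_x_forwarded_for shows the full proxy chain for comparison
    log_format realip '$remote_addr [$time_local] "$request" $status "$http_x_forwarded_for"';
    access_log logs/access.log realip;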
5. Site (vhost) configuration: tomcat.conf
[root@Tomcat conf]# mkdir vhost
[root@Tomcat conf]# cd vhost/
[root@Tomcat vhost]# vi tomcat.conf
upstream tomcat_server {
    server 192.168.0.222:8080;
}

server {
    listen       80;
    server_name  192.168.0.222;
    root   /usr/local/tomcat/webapps/ROOT/;    # same docroot as Tomcat
    index  index.html index.jsp index.php;

    # JSP requests go to Tomcat; everything else is served by nginx from the docroot
    location ~ .*\.jsp$ {
        proxy_next_upstream http_503 http_500 http_502 error timeout invalid_header;
        proxy_pass http://tomcat_server;
    }

    # this setup uses the stock Tomcat webapps; the two locations below just expose the Tomcat manager apps
    location ~ /manager/ {
        proxy_pass http://tomcat_server;
    }
    location ~ /host-manager/ {
        proxy_pass http://tomcat_server;
    }

    error_page  404              /404.html;
    error_page  500 502 503 504  /50x.html;
}
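A quick way to confirm the split is behaving: request a static file and a JSP through this vhost and check that only the latter reaches Tomcat. The URLs below reuse the path from the ab tests at the end; substitute whatever actually exists under your docroot:

[root@Tomcat ~]# curl -I http://192.168.0.222/docs/security-howto.html    # static: answered by nginx from the docroot
[root@Tomcat ~]# curl -I http://192.168.0.222/index.jsp                   # dynamic: matched by the .jsp location and proxied to Tomcat on 8080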
=======================
The configuration above already gives us nginx + Tomcat dynamic/static separation; the other Tomcat host is configured the same way. Note that I deliberately do not cache static files here, because I plan to cache them on the frontend:
1. If the backend cached as well, every content update would mean purging the cache on both the frontend and the backend, which is a hassle.
2. If the backend cached as well, the static-file location would need a proxy_pass to Tomcat, which defeats the dynamic/static separation; yet without the proxy_pass, purge cannot be used to evict those objects from the cache. You could of course clear it by hand with a shell script instead (a minimal sketch follows).
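For completeness, a minimal sketch of that manual shell-script option, assuming the frontend cache lives in /cache/proxy_cache as configured further down; it simply wipes the on-disk cache and lets nginx rebuild it on the next requests:

#!/bin/bash
# crude full flush of the frontend proxy cache (run on 192.168.0.211)
CACHE_DIR=/cache/proxy_cache
rm -rf "${CACHE_DIR:?}"/*
echo "proxy cache under $CACHE_DIR cleared"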
=======================
Configuring the frontend: CentOS 192.168.0.211: nginx + ngx_cache_purge
The frontend nginx configuration is much like the backend's; the main difference is that the cache-purge module is compiled in. For the basics, see the backend installation above.
1. Unpack nginx and ngx_cache_purge; the only real difference from the backend build is the extra --add-module pointing at ngx_cache_purge:
[root@Nginx-C opt]# tar zxf ngx_cache_purge-2.0.tar.gz
[root@Nginx-C opt]# tar zxf nginx-1.4.3.tar.gz
[root@Nginx-C opt]# cd nginx-1.4.3
[root@Nginx-C nginx-1.4.3]# ./configure --user=www --group=www --add-module=../ngx_cache_purge-2.0 --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module
[root@Nginx-C nginx-1.4.3]# make && make install
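If you want to be sure the module made it into the binary, the configure arguments are recorded in the build and can be checked like this:

[root@Nginx-C nginx-1.4.3]# /usr/local/nginx/sbin/nginx -V 2>&1 | grep ngx_cache_purge    # the --add-module path should show up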
2. Main configuration file: nginx.conf
user  www www;
worker_processes  1;
error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
pid        logs/nginx.pid;
worker_rlimit_nofile 51200;

events {
    use epoll;
    worker_connections  51200;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    server_name_in_redirect off;
    client_max_body_size 10m;
    client_body_buffer_size 128k;

    sendfile        on;
    tcp_nopush      on;
    tcp_nodelay     on;

    #keepalive_timeout  0;
    keepalive_timeout  60;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 3;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain application/x-javascript text/css application/xml image/jpeg image/gif image/png;    # image types added so pictures are compressed too
    gzip_vary on;
    gzip_proxied any;

    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    proxy_buffer_size 64k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;                        # pass the real client IP to the backends
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;

    proxy_cache_path /cache/proxy_cache levels=1:2 keys_zone=cache_one:100m inactive=1d max_size=30g;    # grow the 100m key zone and the 30g cache as your traffic requires
    proxy_temp_path  /cache/proxy_temp;

    include vhost/*.conf;
}
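The proxy_cache_path / proxy_temp_path directives point at /cache; to avoid permission surprises it is safest to create those directories yourself and hand them to the worker user before starting nginx:

[root@Nginx-C ~]# mkdir -p /cache/proxy_cache /cache/proxy_temp
[root@Nginx-C ~]# chown -R www.www /cache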
3. Site (vhost) configuration
upstream tomcat_server {
    server 192.168.0.222 weight=1 max_fails=2 fail_timeout=30s;    # backend nginx on port 80, not Tomcat's 8080
    server 192.168.0.223 weight=1 max_fails=2 fail_timeout=30s;
}

server {
    listen       80;
    server_name  192.168.0.211;
    index  index.html index.jsp index.php;

    location / {
        proxy_next_upstream http_503 http_500 http_502 error timeout invalid_header;
        proxy_cache cache_one;
        add_header Nginx-Cache "$upstream_cache_status";
        proxy_cache_key $host$uri$is_args$args;
        proxy_set_header Accept-Encoding "";
        proxy_pass http://tomcat_server;
        proxy_cache_valid 200 304 12h;
        proxy_cache_valid 301 302 1m;
        proxy_cache_valid any 1m;
        expires 1d;
    }

    # do not cache .jsp and .do requests
    location ~ .*\.(jsp|do)$ {
        proxy_set_header Accept-Encoding "";    # only this header is set here; the rest live in the main config so new vhosts don't have to repeat them
        proxy_pass http://tomcat_server;
    }

    location ~ /purge(/.*) {
        allow 127.0.0.1;
        allow 192.168.0.0/24;
        deny all;
        proxy_cache_purge cache_one $host$1$is_args$args;
    }

    location /ngx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow 192.168.0.0/24;    # restrict to your own addresses
        deny all;
    }
}
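With the frontend running, the Nginx-Cache header added above makes the cache easy to observe, and the /purge location evicts a single object (the path after /purge has to match the cache key, i.e. host plus URI). For example:

[root@Nginx-C ~]# curl -sI http://192.168.0.211/docs/security-howto.html | grep Nginx-Cache    # MISS on the first request, HIT on a repeat
[root@Nginx-C ~]# curl -I http://192.168.0.211/purge/docs/security-howto.html                  # evict that URL (allowed only from 127.0.0.1 / 192.168.0.0/24)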
Finally, load testing
These tests use ab (ApacheBench); how to install ab on its own will be covered separately later (a quick pointer follows).
1. Frontend
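If ab is not already on the machine you test from, on CentOS it ships in the httpd-tools package:

[root@Tomcat ~]# yum install -y httpd-tools    # provides /usr/bin/ab
[root@Tomcat ~]# ab -V                         # prints the ApacheBench version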
[root@Tomcat ~]# ab -c 1000 -n 4000 http://192.168.0.211/docs/security-howto.html
====
Requests per second:    3304.24 [#/sec] (mean)
Time per request:       302.642 [ms] (mean)
Time per request:       0.303 [ms] (mean, across all concurrent requests)
Transfer rate:          110426.03 [Kbytes/sec] received
2. Backend nginx directly
[root@Nginx-C vhost]# ab -c 1000 -n 4000 http://192.168.0.222/docs/security-howto.html
====
Requests per second:    3416.84 [#/sec] (mean)
Time per request:       292.668 [ms] (mean)
Time per request:       0.293 [ms] (mean, across all concurrent requests)
Transfer rate:          114681.80 [Kbytes/sec] received
3. Tomcat directly
[root@Nginx-C vhost]# ab -c 1000 -n 4000 http://192.168.0.222:8080/docs/security-howto.html
====
Requests per second:    1995.18 [#/sec] (mean)
Time per request:       501.209 [ms] (mean)
Time per request:       0.501 [ms] (mean, across all concurrent requests)
Transfer rate:          66449.32 [Kbytes/sec] received
The frontend's numbers are slightly lower than hitting the backend nginx + Tomcat pair directly, but it adds load balancing on top, and either way the result is clearly far better than Tomcat serving everything on its own.