Nginx ("engine x") is a high-performance HTTP and reverse proxy server, known for its small memory footprint and strong concurrency.
The following source packages are required for installation:
pcre-xxx.tar.gz
openssl-xxx.tar.gz
zlib-xxx.tar.gz
nginx-xxx.tar.gz
wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz
Extract the archive and run ./configure. When it finishes, go back to the pcre directory and run make,
then run make install.
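Taken together, the PCRE installation looks roughly like this (a minimal sketch using the 8.37 tarball downloaded above):
tar -zxvf pcre-8.37.tar.gz
cd pcre-8.37
./configure
make
make install
pcre-config --version    #verify the installed PCRE version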
Next, install the remaining build dependencies (openssl, zlib, and the compiler toolchain) via yum:
yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
Step 4: install nginx.
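The nginx build itself follows the same pattern; a sketch, substituting the real version for xxx:
tar -zxvf nginx-xxx.tar.gz
cd nginx-xxx
./configure
make && make install
#after installation the nginx binary is located at /usr/local/nginx/sbin/nginx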
Check which ports and services are open on the firewall:
firewall-cmd --list-all
firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --reload
1. Start command
In the /usr/local/nginx/sbin directory, run ./nginx
2. Stop command
In the /usr/local/nginx/sbin directory, run ./nginx -s stop
3. Reload command
In the /usr/local/nginx/sbin directory, run ./nginx -s reload
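A quick way to confirm the binary and configuration are healthy (standard nginx flags):
/usr/local/nginx/sbin/nginx -v    #print the nginx version
/usr/local/nginx/sbin/nginx -t    #test the configuration file syntax
ps -ef | grep nginx               #confirm the master and worker processes are running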
Under the nginx installation directory, the default configuration files live in the conf subdirectory, and the main configuration file
nginx.conf is among them. Most subsequent work with nginx comes down to editing this file.
Lines beginning with # are comments. With the comments removed, the file looks like this:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
From the file above, the nginx.conf configuration file clearly falls into three parts.
Part 1: the global block
worker_processes 1;
The global block contains directives that affect the Nginx server as a whole; worker_processes sets the number of worker processes, and a larger value allows more concurrent connections to be handled, within the limits of the hardware.
Part 2: the events block
The directives in the events block mainly affect the network connections between the Nginx server and its clients. Common settings include whether to serialize accepts across multiple worker processes, whether a worker may accept several connections at once, which event-driven model to use for handling requests, and the maximum number of connections each worker process may hold.
The example above means each worker process supports at most 1024 connections.
This part of the configuration has a significant impact on Nginx performance and should be tuned to the actual workload.
events {
worker_connections 1024;
}
Part 3: the http block
This is where most of the configuration work happens: proxying, caching, log definitions, and most third-party modules are configured here. Note that the http block itself contains an http global block plus one or more server blocks.
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Reverse proxy example 1: requests to http://www.123.com are forwarded to 127.0.0.1:8080. Because www.123.com is entered without a port number, the request arrives on the default port 80, and nginx then proxies it on to 127.0.0.1:8080. Enter http://www.123.com in the browser; the following server block handles the forwarding:
server {
listen 80;
server_name www.123.com; #the domain name to match
location / {
proxy_pass http://127.0.0.1:8080; #the proxy target
index index.html index.htm;
}
}
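For www.123.com to reach this nginx server at all, the client needs a DNS record or a hosts-file entry; a sketch, assuming the nginx host is 192.168.17.129 (the address used elsewhere in this document):
192.168.17.129 www.123.com    #add to the client's hosts file, e.g. C:\Windows\System32\drivers\etc\hosts on Windows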
Reverse proxy example 2: requests to http://127.0.0.1:9001/edu/ are forwarded to 127.0.0.1:8081, and requests to http://127.0.0.1:9001/vod/ are forwarded to 127.0.0.1:8082. Add another server{} block to nginx.conf:
server {
listen 9001;
server_name localhost;
location ~ /edu/ {
proxy_pass http://localhost:8081;
}
location ~ /vod/ {
proxy_pass http://localhost:8082;
}
}
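If firewalld is running, the ports used in this example also need to be opened, along the same lines as the port-80 rule shown earlier (the port numbers here are the ones from this example):
firewall-cmd --add-port=9001/tcp --permanent
firewall-cmd --add-port=8081/tcp --permanent
firewall-cmd --add-port=8082/tcp --permanent
firewall-cmd --reload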
The location directive has the following syntax:
location [ = | ~ | ~* | ^~ ] uri {
}
= : used before a uri that contains no regular expression; the request string must match the uri exactly, and on a successful match the search stops and the request is handled immediately.
~ : the uri is treated as a regular expression and matched case-sensitively.
~* : the uri is treated as a regular expression and matched case-insensitively.
^~ : used before a uri that contains no regular expression; once Nginx finds the location whose uri has the longest prefix match with the request string, it uses that location immediately instead of going on to match the regular-expression locations.
Note: if the uri contains a regular expression, it must be marked with ~ or ~*.
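A few illustrative location blocks (the paths are hypothetical) showing how the four modifiers behave:
location = / {                     #exact match: only the request "/" itself
}
location ^~ /static/ {             #prefix match that also suppresses the regex locations below
}
location ~ \.php$ {                #case-sensitive regex: /a.php matches, /a.PHP does not
}
location ~* \.(gif|jpg|png)$ {     #case-insensitive regex: .GIF and .Jpg also match
}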
Load-balancing configuration example (in nginx.conf):
http {
......
upstream myserver{
ip_hash;
server 115.28.52.63:8080 weight=1;
server 115.28.52.63:8180 weight=1;
}
server {
location / {
......
proxy_pass http://myserver;
proxy_connect_timeout 10;
}
......
}
}
Round robin (default)
Each request is assigned to a different backend server in chronological order; if a backend server goes down, it is removed automatically.
weight
weight means weight; it defaults to 1, and the higher the weight, the more clients are assigned to that server.
It specifies the polling probability; weight is proportional to the access ratio and is used when backend server performance is uneven. For example:
ip_hash
Each request is assigned according to the hash of the client IP, so each visitor always reaches the same backend server; this can solve session problems. For example:
fair (third party)
Requests are assigned according to the backend servers' response times; servers with shorter response times are served first.
1.weight
upstream server_pool{
server 192.168.5.21 weight=10;
server 192.168.5.22 weight=10;
}
2.ip_hash
upstream server_pool{
ip_hash;
server 192.168.5.21:80;
server 192.168.5.22:80;
}
3.fair
upstream server_pool{
server 192.168.5.21:80;
server 192.168.5.22:80;
fair;
}
To serve static files directly from nginx, open the /conf/nginx.conf configuration file under the nginx installation directory and add the following configuration:
http {
server {
listen 80;
server_name 192.168.17.129;
location /www/ {
root /data/;
index index.html index.htm;
}
location /image/ {
root /data/;
autoindex on;
}
}
}
In the server block above, the listening port and server name are configured, and the two location blocks serve static files from /data/www and /data/image respectively; autoindex on turns on directory listing for the /image/ location.
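Before testing, the directories referenced above need to exist under /data; a minimal sketch for creating test content (the file names here are only examples):
mkdir -p /data/www /data/image
echo "static test page" > /data/www/index.html
#put a few image files into /data/image/, then request
#http://192.168.17.129/www/index.html and http://192.168.17.129/image/ from a browser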
Separately, the worker processes can be tuned and pinned to CPUs:
worker_processes 4;
#bind workers to CPUs (4 workers bound to 4 CPUs):
worker_cpu_affinity 0001 0010 0100 1000;
#bind workers to CPUs (4 workers bound to 4 of 8 CPUs):
worker_cpu_affinity 00000001 00000010 00000100 00001000;
See the nginx.conf configuration file for details.
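worker_processes and the affinity masks are normally sized to match the number of CPU cores, which can be checked with standard commands such as:
nproc                                  #number of available CPU cores
grep -c '^processor' /proc/cpuinfo     #the same information read from /proc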
Keepalived high-availability configuration (/etc/keepalived/keepalived.conf):
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 192.168.17.
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_script chk_http_port {
script "/usr/local/src/nginx_check.sh"
interval 2 #interval, in seconds, between runs of the check script
weight 2
}
vrrp_instance VI_1 {
state BACKUP # on the backup server, change MASTER to BACKUP
interface ens33 # network interface
virtual_router_id 51 # master and backup must use the same virtual_router_id
priority 100 # master and backup use different priorities: the master gets the larger value, the backup the smaller
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.17.50 # VRRP virtual address
}
}
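Note that a vrrp_script block only takes effect when the instance references it; in keepalived this is done with a track_script section inside vrrp_instance, roughly as follows:
track_script {
chk_http_port    # the script defined above; on failure its weight adjusts this node's priority
}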
The check script referenced above, /usr/local/src/nginx_check.sh:
#!/bin/bash
A=$(ps -C nginx --no-header | wc -l)
if [ $A -eq 0 ];then
/usr/local/nginx/sbin/nginx
sleep 2
if [ $(ps -C nginx --no-header | wc -l) -eq 0 ];then
killall keepalived
fi
fi
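The script must sit at the path named in the vrrp_script block and be executable on both nodes:
chmod +x /usr/local/src/nginx_check.sh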
On each of the two web servers, prepare the environment and install nginx:
systemctl stop firewalld //stop the firewall
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux //disable selinux permanently (takes effect after a reboot)
setenforce 0 //disable selinux for the current session
ntpdate 0.centos.pool.ntp.org //synchronize the clock
yum install nginx -y //install nginx
echo "$(hostname) $(ifconfig ens33 | sed -n 's#.*inet \(.*\)netmask.*#\1#p')" > /usr/share/nginx/html/index.html //prepare a test page: write the hostname and IP into index.html
vim /etc/nginx/nginx.conf //edit the configuration file
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name www.mtian.org;
location / {
root /usr/share/nginx/html;
}
access_log /var/log/nginx/access.log main;
}
}
systemctl start nginx //start nginx
systemctl enable nginx //enable nginx at boot
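At this point each web server can be spot-checked (web01 and web02 are 192.168.1.33 and 192.168.1.34 in the tests later in this document):
curl 192.168.1.33    #should return the hostname/IP test page written to index.html above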
On the load-balancer nodes, configure nginx to proxy to the two web servers:
vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
include /etc/nginx/conf.d/*.conf;
upstream backend {
server 192.168.1.33:80 weight=1 max_fails=3 fail_timeout=20s;
server 192.168.1.34:80 weight=1 max_fails=3 fail_timeout=20s;
}
server {
listen 80;
server_name www.mtian.org;
location / {
proxy_pass http://backend;
proxy_set_header Host $host:$proxy_port;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
}
systemctl start nginx //start nginx
systemctl enable nginx //enable nginx at boot
[root@node01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.32 www.mtian.org
192.168.1.31 www.mtian.org
// To test, shut down the lb1 and lb2 nodes in turn; if the site remains reachable and the round-robin effect is still visible, the nginx LB cluster is working.
[root@node01 ~]# curl http://www.mtian.org
web01 192.168.1.33
[root@node01 ~]# curl http://www.mtian.org
web02 192.168.1.34
[root@node01 ~]# curl http://www.mtian.org
web01 192.168.1.33
[root@node01 ~]# curl http://www.mtian.org
web02 192.168.1.34
[root@node01 ~]# curl http://www.mtian.org
web01 192.168.1.33
[root@node01 ~]# curl http://www.mtian.org
web02 192.168.1.34
yum install keepalived -y //install keepalived on both LB nodes
[root@LB-01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.110/24 dev ens33 label ens33:1
}
}
[root@LB-01 ~]# systemctl start keepalived //start keepalived
[root@LB-01 ~]# systemctl enable keepalived //enable keepalived at boot
[root@LB-01 ~]# ip a //check the IPs: the VIP 192.168.1.110 has appeared
......
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:94:17:44 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.31/24 brd 192.168.1.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.1.110/24 scope global secondary ens33:1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe94:1744/64 scope link
valid_lft forever preferred_lft forever
......
[root@LB-02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.110/24 dev ens33 label ens33:1
}
}
[root@LB-02 ~]# systemctl start keepalived //start keepalived
[root@LB-02 ~]# systemctl enable keepalived //enable keepalived at boot
[root@LB-02 ~]# ifconfig //check the IPs: the backup node does not hold the VIP yet (the VIP only floats over to the backup when the master fails)
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.32 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:feab:6532 prefixlen 64 scopeid 0x20
ether 00:0c:29:ab:65:32 txqueuelen 1000 (Ethernet)
RX packets 43752 bytes 17739987 (16.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4177 bytes 415805 (406.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
......
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
// Stop keepalived on the LB-01 master node, then access the VIP again
[root@LB-01 ~]# systemctl stop keepalived
[root@node01 ~]#
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
// Now check the IP addresses on the LB-01 master node: the VIP is gone
[root@LB-01 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.31 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fe94:1744 prefixlen 64 scopeid 0x20
ether 00:0c:29:94:17:44 txqueuelen 1000 (Ethernet)
RX packets 46813 bytes 18033403 (17.1 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9350 bytes 1040882 (1016.4 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
// Check the IP addresses on the LB-02 backup node: the VIP has floated over successfully
[root@LB-02 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.32 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:feab:6532 prefixlen 64 scopeid 0x20
ether 00:0c:29:ab:65:32 txqueuelen 1000 (Ethernet)
RX packets 44023 bytes 17760070 (16.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4333 bytes 430037 (419.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.110 netmask 255.255.255.0 broadcast 0.0.0.0
ether 00:0c:29:ab:65:32 txqueuelen 1000 (Ethernet)
...
Dual-master setup: each LB node acts as master for one VIP and as backup for the other.
[root@LB-01 ~]# vim /etc/keepalived/keepalived.conf //edit the configuration file and add a new vrrp_instance block
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.110/24 dev ens33 label ens33:1
}
}
vrrp_instance VI_2 {
state BACKUP
interface ens33
virtual_router_id 52
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
192.168.1.210/24 dev ens33 label ens33:2
}
}
[root@LB-01 ~]# systemctl restart keepalived //restart keepalived
// Check the IP addresses on the LB-01 node: the VIP 192.168.1.110 is still held by this node by default
[root@LB-01 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:94:17:44 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.31/24 brd 192.168.1.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.1.110/24 scope global secondary ens33:1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe94:1744/64 scope link
valid_lft forever preferred_lft forever
[root@LB-02 ~]# vim /etc/keepalived/keepalived.conf //edit the configuration file and add a new vrrp_instance block
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
}
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.1.110/24 dev ens33 label ens33:1
}
}
vrrp_instance VI_2 {
state MASTER
interface ens33
virtual_router_id 52
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
192.168.1.210/24 dev ens33 label ens33:2
}
}
[root@LB-02 ~]# systemctl restart keepalived //restart keepalived
// Check the IPs on the LB-02 node: it now also holds a VIP (192.168.1.210), so this node is now a master as well.
[root@LB-02 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:ab:65:32 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.32/24 brd 192.168.1.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.1.210/24 scope global secondary ens33:2
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feab:6532/64 scope link
valid_lft forever preferred_lft forever
(3) Test
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.210
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.210
web02 192.168.1.34
// Stop keepalived on the LB-01 node and test again
[root@LB-01 ~]# systemctl stop keepalived
[root@node01 ~]# curl 192.168.1.110
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.110
web02 192.168.1.34
[root@node01 ~]# curl 192.168.1.210
web01 192.168.1.33
[root@node01 ~]# curl 192.168.1.210
web02 192.168.1.34