Nginx Error Summary

A summary of common Nginx errors encountered on servers.

1. nginx: no live upstreams while connecting to upstream


[error] 27212#0: *314 no live upstreams while connecting to upstream, client: ip_address, server: example.com, request: "GET / HTTP/1.1", upstream: "http://example.com", host: "example.com", referrer: "http://example.com/mypages/"

    fail_timeout=15s means that when requests to a server fail, that server is marked unavailable for 15 seconds. Once the 15 seconds are up, requests are forwarded to it again, regardless of whether it has actually recovered. When every server in the group is marked unavailable at once, Nginx logs "no live upstreams".

upstream example.com {
    # ip_hash;
    server php01 max_fails=3 fail_timeout=15s;
    server php02 max_fails=3 fail_timeout=15s;
}

server {
    listen IP:80;
    server_name example.com;
    access_log /var/log/nginx/example.com.access;
    error_log /var/log/nginx/example.com.error error;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # was "proxy_pass http://$server_name/$uri;" — $uri already starts with "/",
        # and naming the upstream group directly avoids runtime variable resolution
        proxy_pass http://example.com;
        proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_no_cache $cookie_nocache $arg_nocache $arg_comment;
        proxy_no_cache $http_pragma $http_authorization;
    }
}

 

    If you have used NGINX Plus, you will find that its health_check mechanism is far more capable. A few keywords to look up yourself: zone, slow_start, health_check, match. slow_start nicely solves the cache-warming problem: when Nginx notices that a server has come back (for example after a restart), it ramps traffic up gradually over the slow_start period instead of sending full load immediately, which gives the cache time to warm.
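A sketch of what such an NGINX Plus configuration might look like. The directives zone, slow_start, health_check, and match are real NGINX Plus features, but the values and the status_ok match block below are illustrative assumptions, not a tested setup:

```nginx
upstream example.com {
    zone upstream_example 64k;   # shared memory zone, required for active health checks
    server php01 max_fails=3 fail_timeout=15s slow_start=30s;
    server php02 max_fails=3 fail_timeout=15s slow_start=30s;
}

# Hypothetical match block: treat the server as healthy only on a 200 response
match status_ok {
    status 200;
}

server {
    location / {
        proxy_pass http://example.com;
        health_check interval=5s fails=3 passes=2 match=status_ok;
    }
}
```

With slow_start=30s, a server that passes its health checks again is reintroduced gradually over 30 seconds rather than receiving its full share of traffic at once.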


2. nginx: upstream timed out (110: Connection timed out) while reading response header from upstream

 

[error] upstream timed out (110: Connection timed out) while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: howtounix.info, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080", host: "howtounix.info", referrer: "requested_url"

    The Nginx configuration:

    

worker_processes 4;
pid /run/nginx.pid;
worker_rlimit_nofile 16384;

events {
      worker_connections 10000;
      # multi_accept on;
}

server {
    listen       80;
    server_name  howtounix.info;
 
    location / {
        ...
        proxy_read_timeout 60;
        ...
    }
    ...
}

    This error means the upstream web service did not respond within 60 seconds, so Nginx timed out. proxy_read_timeout defaults to 60 seconds; setting it to 120 makes Nginx wait up to 120 seconds for a response.
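Besides proxy_read_timeout, the connect and send phases have their own timeouts; a sketch raising all three (the values and the upstream address are illustrative):

```nginx
location / {
    proxy_connect_timeout 10s;   # time allowed to establish the TCP connection to the upstream
    proxy_send_timeout    120s;  # timeout between two successive writes to the upstream
    proxy_read_timeout    120s;  # timeout between two successive reads from the upstream
    proxy_pass http://127.0.0.1:8080;
}
```

Note that proxy_read_timeout bounds the gap between two successive read operations, not the total response time, so a slowly streaming upstream can legitimately take longer than the configured value.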

 

    Server open-file limits:

 

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32063
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32063
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
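If "open files" (65536 here) turns out to be the ceiling, it can be raised persistently. A sketch for /etc/security/limits.conf; the nginx user name is an assumption, and systemd-managed services use LimitNOFILE in the unit file instead:

```conf
# /etc/security/limits.conf — per-user open file limits (illustrative values)
nginx  soft  nofile  65536
nginx  hard  nofile  65536
```

Inside Nginx itself, worker_rlimit_nofile (set to 16384 in the configuration above) lets worker processes raise their own limit independently of the shell that started them.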

   

 

    Related Tomcat configuration:

maxThreads="500" minSpareThreads="150" acceptCount="250" acceptorThreadCount="2"
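For context, these attributes belong on the Connector element in Tomcat's conf/server.xml; a sketch using the values above (the port, protocol, and connectionTimeout are assumptions):

```xml
<!-- server.xml: HTTP connector sized with the thread/backlog values above -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="500" minSpareThreads="150"
           acceptCount="250" acceptorThreadCount="2"
           connectionTimeout="20000" />
```

maxThreads caps concurrent request-processing threads, while acceptCount is the OS-level backlog for connections waiting when all threads are busy; requests beyond both are refused.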

 

    Optimization options:

     1. Tune tcp_fin_timeout

     2. Widen the local port range

     3. Add more Nginx servers so each instance handles fewer requests

 

    Check the current local port range:

$ sysctl net.ipv4.ip_local_port_range

    Example output:

net.ipv4.ip_local_port_range = 32768    61000
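The width of this range bounds how many concurrent outbound connections the proxy can open to a single upstream address, since each needs its own ephemeral source port. With the default range above:

```shell
# Number of usable ephemeral ports with the default range (inclusive on both ends)
low=32768
high=61000
echo $(( high - low + 1 ))   # → 28233
```

Once TIME_WAIT sockets pin most of these ports, new upstream connections start failing, which is why widening the range helps.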

    Set a new port range:

# echo 1024 65535 > /proc/sys/net/ipv4/ip_local_port_range

    Or equivalently:

$ sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"

    To make the change permanent, append the following to /etc/sysctl.conf and run `sysctl -p`:

# increase system IP port limits
net.ipv4.ip_local_port_range = 1024 65535

    Check connection states:

    netstat -ant | grep TIME_WAIT | wc -l

A large number of TIME_WAIT entries is not in itself a cause for alarm; it usually just means many short-lived connections are being opened and closed in quick succession.
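To see which states dominate rather than counting only TIME_WAIT, a small awk filter can tally every state at once. A sketch: with `netstat -ant` output the state is the last field of each line, which is what the function keys on:

```shell
# Tally TCP connection states by the last field of each line
tally_states() {
    awk '{count[$NF]++} END {for (s in count) print s, count[s]}'
}

# Demo with canned netstat-style lines; real usage: netstat -ant | tally_states
printf '%s\n' \
    'tcp 0 0 10.0.0.1:80 10.0.0.2:50001 TIME_WAIT' \
    'tcp 0 0 10.0.0.1:80 10.0.0.2:50002 TIME_WAIT' \
    'tcp 0 0 10.0.0.1:80 10.0.0.2:50003 ESTABLISHED' | tally_states | sort
```

On modern Linux, `ss -ant` is the preferred replacement for netstat; its state column is the first field, so the awk key would be `$1` instead of `$NF`.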

cat /proc/sys/net/ipv4/tcp_fin_timeout

echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout

Or append to the end of /etc/sysctl.conf:

net.ipv4.tcp_fin_timeout = 30

If the server does not use RPC or NFS services, disable them:

/etc/init.d/nfsd stop
chkconfig nfsd off

 
