Apache vs Nginx Performance: Optimization Techniques


Some years ago, the Apache Foundation’s web server, known simply as “Apache”, was so ubiquitous that it became synonymous with the term “web server”. Its daemon process on Linux systems is named httpd (short for HTTP daemon) and comes preinstalled in major Linux distributions.

It was initially released in 1995, and, to quote Wikipedia, “it played a key role in the initial growth of the World Wide Web”. It is still the most-used web server software according to W3techs. However, reports that track trends over the last decade and compare it to other solutions show its market share decreasing. The reports given by Netcraft and Builtwith differ a bit, but all agree on a trending decline of Apache’s market share and the growth of Nginx.

Nginx — pronounced engine x — was released in 2004 by Igor Sysoev, with the explicit intent to outperform Apache. Nginx’s website has an article worth reading which compares these two technologies. At first, it was mostly used as a supplement to Apache, mostly for serving static files, but it has been steadily growing, as it has been evolving to deal with the full spectrum of web server tasks.

It is often used as a reverse proxy, load balancer, and for HTTP caching. CDNs and video streaming providers use it to build their content delivery systems where performance is critical.

Apache has been around for a long time, and it has a big choice of modules. Managing Apache servers is known to be user-friendly. Dynamic module loading allows for different modules to be compiled and added to the Apache stack without recompiling the main server binary. Oftentimes, modules will be in Linux distro repositories, and after installing them through system package managers, they can be gracefully added to the stack with commands like a2enmod. This kind of flexibility has yet to be seen with Nginx. When we look at a guide for setting up Nginx for HTTP/2, modules are something Nginx needs to be built with — configured for at build-time.

One other feature that has contributed to Apache’s market rule is the .htaccess file. It is Apache’s silver bullet, which made it a go-to solution for the shared hosting environments, as it allows controlling the server configuration on a directory level. Every directory on a server served by Apache can have its own .htaccess file.

Nginx not only has no equivalent solution, but discourages such usage due to performance hits.

Server vendors market share 1995–2005. Data by Netcraft

LiteSpeed, or LSWS, is one server contender that has a level of flexibility that can compare to Apache, while not sacrificing performance. It supports Apache-style .htaccess, mod_security and mod_rewrite, and it’s worth considering for shared setups. It was planned as a drop-in replacement for Apache, and it works with cPanel and Plesk. It’s been supporting HTTP/2 since 2015.

LiteSpeed has three license tiers, OpenLiteSpeed, LSWS Standard and LSWS Enterprise. Standard and Enterprise come with an optional caching solution comparable to Varnish, LSCache, which is built into the server itself, and can be controlled, with rewrite rules, in .htaccess files (per directory). It also comes with some DDOS-mitigating “batteries” built in. This, along with its event-driven architecture, makes it a solid contender, targeting primarily performance-oriented hosting providers, but it could be worth setting up even for smaller servers or websites.

Hardware Considerations

When optimizing our system, we cannot overemphasize the importance of the hardware setup. Whichever of these solutions we choose, having enough RAM is critical. When a web server process, or an interpreter like PHP, doesn’t have enough RAM, it starts swapping, and swapping effectively means using the hard disk to supplement RAM. The effect of this is increased latency every time this memory is accessed. This takes us to the second point: hard disk space. Using fast SSD storage is another critical factor in our website speed. We also need to mind the CPU availability, and the physical distance of our server’s data centers from our intended audience.
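Before tuning anything, it’s worth checking whether the server is already swapping under load. A minimal sketch, assuming a Linux system with the usual procps and util-linux tools available:

# show RAM and swap usage in human-readable units
free -h

# list any active swap devices or files
swapon --show

# sample memory stats every second, five times; non-zero si/so columns mean we are swapping
vmstat 1 5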

To dive in deeper into the hardware side of performance tuning, Dropbox has a good article.

Monitoring

One practical way to monitor our current server stack performance, per process in detail, is htop, which works on Linux, Unix and macOS, and gives us a colored overview of our processes.

Other monitoring tools are New Relic, a premium solution with a comprehensive set of tools, and Netdata, an open-source solution which offers great extensibility, fine-grained metrics and a customizable web dashboard, suitable both for small VPS systems and for monitoring a network of servers. It can send alarms for any application or system process via email, Slack, Pushbullet, Telegram, Twilio, etc.

Monit is another, headless, open-source tool which can monitor the system, and can be configured to alert us, or restart certain processes, or reboot the system when some conditions are met.

Testing the System

AB — Apache Benchmark — is a simple load-testing tool by Apache Foundation, and Siege is another load-testing program. This article explains how to set them both up, and here we have some more advanced tips for AB, while an in-depth look at Siege can be found here.
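As a quick illustration, a typical invocation of each tool might look like the following (the URL is a placeholder and the request counts are arbitrary):

# 1000 requests, 50 at a time, with Apache Benchmark
ab -n 1000 -c 50 https://my-website.com/

# 50 concurrent simulated users hitting the site for one minute with Siege
siege -c 50 -t 1M https://my-website.com/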

If you prefer a web interface, there is Locust, a Python-based tool that comes in very handy for testing website performance.

After we install Locust, we need to create a locustfile in the directory from which we will launch it:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    # Task weights: shop() is picked twice as often as index(), page() three times as often
    @task(1)
    def index(self):
        self.client.get("/")

    @task(2)
    def shop(self):
        self.client.get("/?page_id=5")

    @task(3)
    def page(self):
        self.client.get("/?page_id=2")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    # simulated users wait between 300 and 3000 milliseconds between tasks
    min_wait = 300
    max_wait = 3000

Then we simply launch it from the command line:

locust --host=https://my-website.com

One warning with these load-testing tools: they have the effect of a DDoS attack, so it’s recommended you limit testing to your own websites.

Tuning Apache

Apache’s mpm modules

Apache dates to 1995 and the early days of the internet, when an accepted way for servers to operate was to spawn a new process on each incoming TCP connection and to reply to it. If more connections came in, more worker processes were created to handle them. The costs of spawning new processes were high, and Apache developers devised a prefork mode, with a pre-spawned number of processes. Embedded dynamic language interpreters within each process (like mod_php) were still costly, and server crashes with Apache’s default setups became common. Each process was only able to handle a single incoming connection.

This model is known as mpm_prefork_module within Apache’s MPM (Multi-Processing Module) system. According to Apache’s website, this mode requires little configuration, because it is self-regulating, and most important is that the MaxRequestWorkers directive be big enough to handle as many simultaneous requests as you expect to receive, but small enough to ensure there’s enough physical RAM for all processes.
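For illustration, a sketch of what such a prefork configuration might look like in /etc/apache2/mods-available/mpm_prefork.conf on a Debian-style system (the numbers are illustrative and depend on available RAM and the memory footprint of each process):

<IfModule mpm_prefork_module>
    StartServers              5
    MinSpareServers           5
    MaxSpareServers          10
    MaxRequestWorkers       150
    MaxConnectionsPerChild 1000
</IfModule>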

A small Locust load test showing the spawning of a huge number of Apache processes to handle the incoming traffic.

We may add that this mode is perhaps the biggest cause of Apache’s bad name. It can get very resource-inefficient.

Version 2 of Apache brought another two MPMs that try to solve the issues that prefork mode has. These are worker module, or mpm_worker_module, and event module.

Worker module is not process-based anymore; it’s a hybrid process-thread based mode of operation. Quoting Apache’s website,

a single control process (the parent) is responsible for launching child processes. Each child process creates a fixed number of server threads as specified in the ThreadsPerChild directive, as well as a listener thread which listens for connections and passes them to a server thread for processing when they arrive.

This mode is more resource efficient.

Apache 2.4 brought us the third MPM: the event module. It is based on the worker MPM, and adds a separate listening thread that manages dormant keepalive connections after the HTTP request has completed. It’s a non-blocking, asynchronous mode with a smaller memory footprint. More about the version 2.4 improvements here.

We have loaded a testing WooCommerce installation with around 1200 posts on a virtual server and tested it on Apache 2.4 with the default, prefork mode, and mod_php.

First we tested it with libapache2-mod-php7 and mpm_prefork_module at https://tools.pingdom.com:

Then we went on to test the event MPM module.

We had to add multiverse to our /etc/apt/sources.list:

deb http://archive.ubuntu.com/ubuntu xenial main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu xenial-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu xenial-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner

Then we did sudo apt-get update and installed libapache2-mod-fastcgi and php-fpm:

sudo apt-get install libapache2-mod-fastcgi php7.0-fpm

Since php-fpm is a service separate from Apache, it needed a restart:

sudo service php7.0-fpm start

Then we disabled the prefork module, and enabled the event mode and proxy_fcgi:

sudo a2dismod php7.0 mpm_prefork
sudo a2enmod mpm_event proxy_fcgi

We added this snippet to our Apache virtual host:

<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000/"
</FilesMatch>

This port needs to be consistent with php-fpm configuration in /etc/php/7.0/fpm/pool.d/www.conf. More about the php-fpm setup here.
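For reference, this is the kind of line to look for in that pool file. On Debian/Ubuntu the default is a Unix socket, so a TCP listener like the one proxied to above has to be set explicitly (a sketch, not necessarily the distro default):

; /etc/php/7.0/fpm/pool.d/www.conf
listen = 127.0.0.1:9000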

Then we tuned the mpm_event configuration in /etc/apache2/mods-available/mpm_event.conf, keeping in mind that our mini-VPS resources for this test were constrained — so we merely reduced some default numbers. Details about every directive can be found on Apache’s website, and tips specific to the event MPM here. Keep in mind that started servers consume an amount of memory regardless of how busy they are. The MaxRequestWorkers directive sets the limit on the number of simultaneous requests allowed: setting MaxConnectionsPerChild to a value other than zero is important, because it prevents a possible memory leak.


<IfModule mpm_event_module>
        StartServers              1
        MinSpareThreads          30
        MaxSpareThreads          75
        ThreadLimit              64
        ThreadsPerChild          30
        MaxRequestWorkers        80
        MaxConnectionsPerChild   80
</IfModule>

Then we restarted the server with sudo service apache2 restart (if we change some directives, like ThreadLimit, we will need to stop and start the service explicitly, with sudo service apache2 stop; sudo service apache2 start).

Our tests on Pingdom now showed page load time reduced by more than half:

Other tips for tuning Apache:

Disabling .htaccess: .htaccess allows setting specific configuration for every single directory in our server root, without restarting. So traversing all the directories, looking for .htaccess files on every request, incurs a performance penalty.

Quote from the Apache docs:

In general, you should only use .htaccess files when you don’t have access to the main server configuration file. … In general, use of .htaccess files should be avoided when possible. Any configuration that you would consider putting in a .htaccess file can just as effectively be made in a <Directory> block in your main server configuration file.

The solution is to disable it in /etc/apache2/apache2.conf:

AllowOverride None

If we need it for specific directories, we can then enable it within <Directory> sections in our virtual host files:

AllowOverride All
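For example, a sketch scoped to a single, hypothetical directory inside a virtual host (the path is illustrative):

<Directory /var/www/my-website.com/public>
    AllowOverride All
</Directory>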

Further tips include:

  • Control the browser cache with mod_expires — by setting the expires headers (a sketch follows this list).

  • Keep HostNameLookups turned off — HostNameLookups Off is the default since Apache 1.3, but make sure it stays off, because it can incur a performance penalty.

  • Apache2buddy is a simple script that we can run and get tips for tuning our system: curl -sL https://raw.githubusercontent.com/richardforth/apache2buddy/master/apache2buddy.pl | perl
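A sketch of the mod_expires setup referenced above (the MIME types and lifetimes are illustrative, and the module is assumed to be enabled, e.g. with a2enmod expires):

<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg             "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
    ExpiresDefault                       "access plus 1 day"
</IfModule>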

Nginx

Nginx is an event-driven and non-blocking web server. To quote one poster on Hacker News,

Forking processes is incredibly expensive compared to an event loop. Event-based HTTP servers inevitably won.

This statement sparked quite a debate on Hacker News, but from our experience, just switching from mpm_prefork Apache to Nginx can often mean saving the website from crashing. Simply switching to Nginx is very often a cure in itself.

A more thorough visual explanation of Nginx architecture can be found here.

Nginx settings

Nginx recommends pinning the number of workers to the number of CPU cores (just like we did with Apache’s mpm_event configuration), by setting worker_processes to auto (the default is 1) in /etc/nginx/nginx.conf.

worker_connections sets the number of connections every worker process can handle. The default is 512, but it can usually be increased.
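A minimal sketch of both settings in /etc/nginx/nginx.conf (the connection count is illustrative):

worker_processes auto;          # one worker process per CPU core

events {
    worker_connections 1024;    # connections each worker process can handle
}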

Keepalive connections are a server aspect that impacts performance, which isn’t usually visible in benchmarks.

According to the Nginx website,

HTTP keepalive connections are a necessary performance feature that reduce latency and allow web pages to load faster.

Establishing new TCP connections can be costly — not to mention when there is HTTPS encryption involved. The HTTP/2 protocol mitigates this with its multiplexing features. Reusing an existing connection can reduce request times.

Apache’s mpm_prefork and mpm_worker suffer from concurrency limitations that contrast with the keepalive event loop. This is somewhat fixed in Apache 2.4, in the mpm_event module, and comes as the only, default mode of operation in Nginx. Nginx workers can handle thousands of incoming connections simultaneously, and if it’s used as a reverse proxy or a load balancer, Nginx then uses a local pool of keepalive connections, without TCP connection overhead.

keepalive_requests is a setting that regulates the number of requests a client can make over a single keepalive connection. keepalive_timeout sets the time an idle keepalive connection stays open.
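A sketch of both directives inside the http block of nginx.conf (the values are illustrative):

http {
    keepalive_timeout  65;      # how long an idle client connection stays open
    keepalive_requests 1000;    # requests allowed over a single keepalive connection
}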

keepalive is a setting pertaining to an Nginx connection to an upstream server — when it acts as a proxy or load balancer. It sets the number of idle keepalive upstream connections kept open per worker process.

Enabling the upstream keepalive connections requires putting these directives into the Nginx main configuration:

proxy_http_version 1.1;
proxy_set_header Connection "";

Nginx upstream connections are managed by ngx_http_upstream_module.
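Putting it together, a sketch of an upstream block with keepalive connections, used from a proxying location (the server address, port and connection count are illustrative):

upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;               # idle upstream connections kept open per worker
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://backend;
    }
}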

If our front-end application keeps polling our back-end application for updates, increasing the keepalive_requests and keepalive_timeout will limit the number of connections that need to be established. The keepalive directive shouldn’t be too large, to allow for other connections to reach our upstream server.

The tuning of these settings is done on a per-case basis, and needs to be tested. That is maybe one reason why keepalive doesn’t have a default setting.

Using Unix sockets

By default, Nginx uses a separate PHP process to which it forwards PHP file requests. In this, it acts as a proxy (just like Apache when we set it up with php7.0-fpm).

Often our virtual host setup with Nginx will look like this:

location ~ \.php$ {
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}

Since FastCGI is a different protocol from HTTP, the first two lines are forwarding some arguments and headers to php-fpm, while the third line specifies the way to proxy our request — over a local network socket.

This is practical for multi-server setups, since we can also specify remote servers to proxy requests to.

But if we’re hosting our whole setup on a single system, we should use a Unix socket to connect to the listening php process:

fastcgi_pass unix:/var/run/php7.0-fpm.sock;
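For this to work, the php-fpm pool has to listen on the same socket. A sketch of the matching line in the pool configuration (the path mirrors the example above; default socket paths differ between distributions and PHP versions):

; /etc/php/7.0/fpm/pool.d/www.conf
listen = /var/run/php7.0-fpm.sock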

Unix sockets are considered to have better performance than TCP, and this setup is considered safer. You can find more details about this setup in this article by Rackspace.

This tip regarding Unix sockets is also applicable for Apache. More details here.

gzip_static: the accepted wisdom around web server performance is to compress our static assets. This often means we’ll try to compromise, and try to compress only files that are above some threshold, because compressing resources on the fly, with every request, can be expensive. Nginx has a gzip_static directive that allows us to serve gzipped versions of files — with extension .gz — instead of regular resources:

location /assets {
    gzip_static on;
}

This way, Nginx will try to serve style.css.gz instead of style.css (we need to take care of the gzipping ourselves, in this case).
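Pre-compressing can be done ahead of time, for example as part of a deploy step. A sketch, assuming GNU gzip with the -k (keep originals) flag and an illustrative assets path:

# create .gz siblings for CSS and JS files, keeping the originals
# for clients that don't accept gzip
find /var/www/my-website.com/assets -type f \( -name '*.css' -o -name '*.js' \) \
    -exec gzip -k -9 {} \;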

This way, the CPU cycles won’t be wasted through on-the-fly compression for every request.

Caching with Nginx

The story about Nginx wouldn’t be complete without mentioning how to cache content. Nginx caching is so efficient that many sysadmins don’t think that separate layers for HTTP caching — like Varnish — make much sense. Perhaps it is less elaborate, but simplicity is a feature. Enabling caching with Nginx is rather simple.

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
  inactive=60m;

This is a directive we place in our virtual host file, outside of the server block. The proxy_cache_path argument can be any path we want to store our cache. levels designates how many levels of directories Nginx should store cached content in. For performance reasons, two levels are usually okay. Recursing through the directories can be costly. The keys_zone argument is a name for a shared memory zone used for storing the cache keys, and 10m is room for those keys in memory (10MB is usually enough; this isn’t the room for actual cached content). max_size is optional, and sets the upper limit for the cached content — here 10GB. If this isn’t specified, it will take up all the available space. inactive specifies how long the content can stay in the cache without being requested, before it gets deleted by Nginx.

Having set this up, we would add the following line, with the name of our memory zone, to either the server or location block:

proxy_cache my_cache;

An extra layer of fault-tolerance with Nginx can be achieved by telling it to serve items from the cache when it encounters a server error on the origin (upstream) server, or when that server is down:

proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;

More details about the server or location block directives to further tune Nginx caching can be found here.

proxy_cache_* directives are for static assets, but we usually want to cache the dynamic output of our web apps — whether it’s a CMS or something else. In this case, we’ll use the fastcgi_cache_* directive instead of proxy_cache_*:

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=my_cache:10m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
add_header NGINX_FASTCGI_CACHE $upstream_cache_status;

The last line above will set response headers to inform us whether the content was delivered from the cache or not.

Then, in our server or location block, we can set some exceptions to caching — for example, when the query string is present in the request URL:

set $skip_cache 0;

# skip the cache whenever the request carries a query string
if ($query_string != "") {
    set $skip_cache 1;
}

Also, in our \.php$ location block, inside the server block, in the case of PHP we would add something like:

location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;

    fastcgi_read_timeout 360s;
    fastcgi_buffer_size 128k;
    fastcgi_buffers 4 256k;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    fastcgi_pass unix:/run/php/php7.0-fpm.sock;

    fastcgi_index index.php;
    fastcgi_cache_bypass $skip_cache;   # fetch from the backend when $skip_cache is set
    fastcgi_no_cache $skip_cache;       # and don't store that response in the cache
    fastcgi_cache my_cache;
    fastcgi_cache_valid 60m;
}

Above, the fastcgi_cache_* lines and fastcgi_no_cache regulate caching and exclusions. Detailed reference for all of these directives can be found on the Nginx docs website.

To learn more, the people over at Nginx have provided a free webinar on this topic, and there’s a number of ebooks available.

Conclusion

We’ve tried to introduce some techniques that will help us improve our web server’s performance, and the theory behind those techniques. But this topic is in no way exhausted: we still haven’t covered reverse-proxy setups that consist of both Apache and Nginx, or multi-server setups. Achieving the top results with both these servers is a matter of testing and analyzing specific, real-life cases. It’s kind of a never-ending topic.

Translated from: https://www.sitepoint.com/apache-vs-nginx-performance-optimization-techniques/
