What is the runtime performance cost of a Docker container?

This article is translated from: What is the runtime performance cost of a Docker container?

I'd like to comprehensively understand the run-time performance cost of a Docker container. I've found references to networking anecdotally being ~100µs slower.

I've also found references to the run-time cost being "negligible" and "close to zero", but I'd like to know more precisely what those costs are. Ideally I'd like to know what Docker abstracts with a performance cost, and what it abstracts without one. Networking, CPU, memory, etc.

Furthermore, if there are abstraction costs, are there ways to get around them? For example, perhaps I can mount a disk directly rather than virtually in Docker.


#1

Reference: https://stackoom.com/question/1TqLF/Docker容器的运行时性能成本是多少


#2

Docker isn't virtualization, as such -- instead, it's an abstraction on top of the kernel's support for different process namespaces, device namespaces, etc.; one namespace isn't inherently more expensive or inefficient than another, so what actually makes Docker have a performance impact is a matter of what's actually in those namespaces.
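You can see this from the host: a container's processes are ordinary host processes that simply live in their own namespaces. A quick way to convince yourself of that (the container name and image below are just illustrative):

# Start a throwaway container, then inspect its main process from the host.
docker run -d --name ns-demo nginx
docker inspect -f '{{.State.Pid}}' ns-demo   # host PID of the container's main process
sudo ls -l /proc/$(docker inspect -f '{{.State.Pid}}' ns-demo)/ns   # the namespaces it lives in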


Docker's choices in terms of how it configures namespaces for its containers have costs, but those costs are all directly associated with benefits -- you can give them up, but in doing so you also give up the associated benefit:

  • Layered filesystems are expensive -- exactly what the costs are varies with each one (and Docker supports multiple backends), and with your usage patterns (merging multiple large directories, or merging a very deep set of filesystems, will be particularly expensive), but they're not free (a sketch of opting out via a bind mount follows this list). On the other hand, a great deal of Docker's functionality -- being able to build guests off other guests in a copy-on-write manner, and getting the storage advantages implicit in same -- rides on paying this cost.
  • DNAT gets expensive at scale -- but gives you the benefit of being able to configure your guest's networking independently of your host's and have a convenient interface for forwarding only the ports you want between them. You can replace this with a bridge to a physical interface, but again, lose the benefit.
  • Being able to run each software stack with its dependencies installed in the most convenient manner -- independent of the host's distro, libc, and other library versions -- is a great benefit, but needing to load shared libraries more than once (when their versions differ) has the cost you'd expect.
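For example, the layered-filesystem cost can be sidestepped for I/O-heavy paths by bind-mounting a host directory into the container, at the price of losing copy-on-write behaviour for that path. A minimal sketch (the image name and paths are placeholders, not taken from the answer above):

# Writes under /data go straight to the host filesystem and bypass the
# copy-on-write storage driver entirely:
docker run -v /srv/appdata:/data myimage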

And so forth. How much these costs actually impact you in your environment -- with your network access patterns, your memory constraints, etc -- is an item for which it's difficult to provide a generic answer.


#3

Here is an excellent 2014 IBM research paper titled "An Updated Performance Comparison of Virtual Machines and Linux Containers" by Felter et al. that provides a comparison between bare metal, KVM, and Docker containers. The general result is that Docker is nearly identical to native performance and faster than KVM in every category.

The exception to this is Docker's NAT -- if you use port mapping (e.g. docker run -p 8080:8080) then you can expect a minor hit in latency, as shown below. However, you can now use the host network stack (e.g. docker run --net=host) when launching a Docker container, which will perform identically to the Native column (as shown in the Redis latency results lower down).
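For reference, the two launch modes being contrasted look roughly like this (the image name is a placeholder; the flags themselves are standard docker run syntax):

# Port-mapped: traffic to host port 8080 is NATed into the container.
docker run -p 8080:8080 myimage

# Host networking: the container shares the host's network stack, no NAT.
docker run --net=host myimage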

They also ran latency tests on a few specific services, such as Redis. You can see that above 20 client threads, highest latency overhead goes Docker NAT, then KVM, then a rough tie between Docker host/native.

Just because it's a really useful paper, here are some other figures. Please download it for full access.

Taking a look at Disk IO:

Now looking at CPU overhead:

Now some examples of memory (read the paper for details; memory can be extra tricky):


#4

Here are some more benchmarks comparing a Docker-based memcached server against a host-native memcached server, using the Twemperf benchmark tool (https://github.com/twitter/twemperf) with 5000 connections and a 20k connection rate.
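A rough sketch of the kind of mcperf invocation (twemperf's client binary) that would match those parameters; the exact flags, item sizes, and server address used for these runs aren't given in the post, so treat all of them as assumptions:

# Assumed invocation: 5000 connections opened at ~20k conn/s against a local
# memcached on the default port, a few calls per connection.
mcperf --server=127.0.0.1 --port=11211 --conn-rate=20000 --num-conns=5000 --num-calls=10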

The connect-time overhead for Docker-based memcached seems to agree with the whitepaper above, coming in at roughly twice the native connect time.

Twemperf Docker Memcached

Connection rate: 9817.9 conn/s
Connection time [ms]: avg 341.1 min 73.7 max 396.2 stddev 52.11
Connect time [ms]: avg 55.0 min 1.1 max 103.1 stddev 28.14
Request rate: 83942.7 req/s (0.0 ms/req)
Request size [B]: avg 129.0 min 129.0 max 129.0 stddev 0.00
Response rate: 83942.7 rsp/s (0.0 ms/rsp)
Response size [B]: avg 8.0 min 8.0 max 8.0 stddev 0.00
Response time [ms]: avg 28.6 min 1.2 max 65.0 stddev 0.01
Response time [ms]: p25 24.0 p50 27.0 p75 29.0
Response time [ms]: p95 58.0 p99 62.0 p999 65.0

Twemperf Centmin Mod Memcached

Connection rate: 11419.3 conn/s
Connection time [ms]: avg 200.5 min 0.6 max 263.2 stddev 73.85
Connect time [ms]: avg 26.2 min 0.0 max 53.5 stddev 14.59
Request rate: 114192.6 req/s (0.0 ms/req)
Request size [B]: avg 129.0 min 129.0 max 129.0 stddev 0.00
Response rate: 114192.6 rsp/s (0.0 ms/rsp)
Response size [B]: avg 8.0 min 8.0 max 8.0 stddev 0.00
Response time [ms]: avg 17.4 min 0.0 max 28.8 stddev 0.01
Response time [ms]: p25 12.0 p50 20.0 p75 23.0
Response time [ms]: p95 28.0 p99 28.0 p999 29.0

Here are benchmarks using the memtier benchmark tool:
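A rough sketch of a memtier_benchmark invocation matching the parameters shown below (4 threads, 50 connections per thread, 10000 requests per thread); the server address and protocol flag are assumptions rather than details from the original post:

# Assumed invocation against memcached's text protocol on the default port.
memtier_benchmark --server=127.0.0.1 --port=11211 --protocol=memcache_text --threads=4 --clients=50 --requests=10000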

memtier_benchmark docker Memcached

4         Threads
50        Connections per thread
10000     Requests per thread
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets       16821.99          ---          ---      1.12600      2271.79
Gets      168035.07    159636.00      8399.07      1.12000     23884.00
Totals    184857.06    159636.00      8399.07      1.12100     26155.79

memtier_benchmark Centmin Mod Memcached

4         Threads
50        Connections per thread
10000     Requests per thread
Type        Ops/sec     Hits/sec   Misses/sec      Latency       KB/sec
------------------------------------------------------------------------
Sets       28468.13          ---          ---      0.62300      3844.59
Gets      284368.51    266547.14     17821.36      0.62200     39964.31
Totals    312836.64    266547.14     17821.36      0.62200     43808.90
