Chapter 4. Compute Nodes

 

Compute nodes form the resource core of the OpenStack Compute cloud, providing the processing, memory, network and storage resources to run instances.

CPU Choice

The type of CPU in your compute node is a very important choice. First, ensure the CPU supports virtualization by way of VT-x for Intel chips and AMD-V for AMD chips.

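On Linux hosts you can confirm this from the CPU flags before committing to an install. The following is a minimal sketch; it only inspects /proc/cpuinfo, so the feature may still need to be enabled in the BIOS even when the flag is present:

    # Minimal sketch: check whether a Linux host advertises hardware
    # virtualization (vmx = Intel VT-x, svm = AMD-V) in /proc/cpuinfo.
    def hardware_virt_supported(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False

    print("Hardware virtualization flag present: %s" % hardware_virt_supported())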

The number of cores that the CPU has also affects the decision. It's common for current CPUs to have up to 12 cores. Additionally, if the CPU supports Hyper-threading, those 12 cores are doubled to 24 cores. If you purchase a server that supports multiple CPUs, the number of cores is further multiplied.

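As a quick worked example (the figures below are illustrative, not a recommendation), the logical core count of a node is simply sockets multiplied by cores per socket, doubled when hyper-threading is enabled:

    # Illustrative arithmetic: logical cores available on one compute node.
    sockets = 2             # CPU sockets in the server (assumed)
    cores_per_socket = 12   # physical cores per CPU (assumed)
    hyperthreading = True   # whether Hyper-Threading/SMT is enabled

    logical_cores = sockets * cores_per_socket * (2 if hyperthreading else 1)
    print("Logical cores per compute node: %d" % logical_cores)  # 48 here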

Whether you should enable hyper-threading on your CPUs depends upon your use case. We recommend you do performance testing with your local workload with both hyper-threading on and off to determine what is more appropriate in your case.

Hypervisor Choice

OpenStack Compute supports many hypervisors to various degrees, including KVM, LXC, QEMU, UML, VMware ESX/ESXi, Xen, PowerVM, and Hyper-V.

Probably the most important factor in your choice of hypervisor is your current usage or experience. Aside from that, there are practical concerns to do with feature parity, documentation, and the level of community experience.

For example, KVM is the most widely adopted hypervisor in the OpenStack community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V than the others listed; however, each of these lacks some feature support, or the documentation on how to use it with OpenStack is out of date.

The best information available to support your choice is found on the Hypervisor Support Matrix (https://wiki.openstack.org/wiki/HypervisorSupportMatrix) and in the reference manual (http://docs.openstack.org/folsom/openstack-compute/admin/content/ch_hypervisors.html).

Note

It is also possible to run multiple hypervisors in a single deployment using Host Aggregates or Cells. However, an individual compute node can only run a single hypervisor at a time.

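As a rough sketch of the host aggregate approach (the endpoint, credentials, aggregate names, and host names below are placeholders, and the client calls assume the Folsom/Grizzly-era python-novaclient API), you could group compute nodes by the hypervisor they run:

    # Hypothetical sketch: one host aggregate per hypervisor type.
    # Credentials, endpoint, and host names are placeholders.
    from novaclient.v1_1 import client

    nova = client.Client("admin", "ADMIN_PASS", "admin",
                         "http://controller:5000/v2.0/")

    # Each compute node runs exactly one hypervisor, so it joins exactly one
    # aggregate; flavors can then be targeted at an aggregate via its metadata.
    kvm_hosts = nova.aggregates.create("kvm-hosts", None)
    xen_hosts = nova.aggregates.create("xen-hosts", None)

    nova.aggregates.set_metadata(kvm_hosts, {"hypervisor": "kvm"})
    nova.aggregates.set_metadata(xen_hosts, {"hypervisor": "xen"})

    nova.aggregates.add_host(kvm_hosts, "compute01")
    nova.aggregates.add_host(xen_hosts, "compute02")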

Instance Storage Solutions

Off Compute Node Storage – Shared File System

On Compute Node Storage – Shared File System

On Compute Node Storage – Non-shared File System

Issues with Live Migration

Choice of File System

As part of the procurement for a compute cluster, you must specify some storage for the disk on which the instantiated instance runs. There are three main approaches to providing this temporary-style storage, and it is important to understand the implications of the choice.

They are:

  • Off compute node storage – shared file system
  • On compute node storage – shared file system
  • On compute node storage – non-shared file system

In general, the questions you should be asking when selecting the storage are as follows:

  • What is the platter count you can achieve?
  • Do more spindles result in better I/O despite network access? (A rough worked example follows this list.)
  • Which one results in the best cost-performance scenario you're aiming for?
  • How do you manage the storage operationally?
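
The spindle question above is largely arithmetic. The sketch below (every figure is an assumption for illustration, not a measurement) compares the aggregate random-I/O capability of the few disks that fit in a compute chassis against a larger remote pool reached over the network:

    # Illustrative spindle arithmetic; all numbers are assumptions.
    iops_per_spindle = 150      # rough figure for a 10k rpm disk
    local_spindles = 6          # disks that fit in a typical compute chassis
    remote_spindles = 48        # disks in a dedicated storage server
    network_efficiency = 0.85   # assumed penalty for crossing the network

    local_iops = local_spindles * iops_per_spindle
    remote_iops = remote_spindles * iops_per_spindle * network_efficiency

    print("Local aggregate IOPS:  %d" % local_iops)     # 900
    print("Remote aggregate IOPS: %d" % remote_iops)    # 6120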

Off Compute Node Storage – Shared File System

Many operators use separate compute and storage hosts. Compute services and storage services have different requirements: compute hosts typically require more CPU and RAM than storage hosts. Therefore, for a fixed budget, it makes sense to have different configurations for your compute nodes and your storage nodes, with compute nodes invested in CPU and RAM and storage nodes invested in block storage.

Also, if you use separate compute and storage hosts then you can treat your compute hosts as "stateless". This simplifies maintenance for the compute hosts. As long as you don't have any instances currently running on a compute host, you can take it offline or wipe it completely without having any effect on the rest of your cloud.

However, if you are more restricted in the number of physical hosts you have available for creating your cloud and you want to be able to dedicate as many of your hosts as possible to running instances, it makes sense to run compute and storage on the same machines.

In this option, the disks storing the running instances are hosted in servers outside of the compute nodes. There are also several advantages to this approach:

  • If a compute node fails, instances are usually easily recoverable.
  • Running a dedicated storage system can be operationally simpler.
  • Being able to scale to any number of spindles.
  • It may be possible to share the external storage for other purposes.

The main downsides to this approach are:

  • Depending on design, heavy I/O usage from some instances can affect unrelated instances.
  • Use of the network can decrease performance.

On Compute Node Storage – Shared File System

In this option, each nova-compute node is specified with a significant amount of disks, but a distributed file system ties the disks from each compute node into a single mount. The main advantage of this option is that it scales to external storage when you require additional storage.

However, this option has several downsides:

  • Running a distributed file system can make you lose your data locality compared with non-shared storage.
  • Recovery of instances is complicated by depending on multiple hosts.
  • The chassis size of the compute node can limit the number of spindles able to be used in a compute node.
  • Use of the network can decrease performance.

On Compute Node Storage – Non-shared File System

In this option, each nova-compute node is specified with enough disks to store the instances it hosts. There are two main reasons why this is a good idea:

  • Heavy I/O usage on one compute node does not affect instances on other compute nodes.
  • Direct I/O access can increase performance.

This has several downsides:

  • If a compute node fails, the instances running on that node are lost.
  • The chassis size of the compute node can limit the number of spindles able to be used in a compute node.
  • Migrations of instances from one node to another are more complicated, and rely on features which may not continue to be developed.
  • If additional storage is required, this option does not scale.

Issues with Live Migration

We consider live migration an integral part of the operations of the cloud. This feature provides the ability to seamlessly move instances from one physical host to another, a necessity for performing upgrades that require reboots of the compute hosts, but only works well with shared storage.

Theoretically live migration can be done with non-shared storage, using a feature known as KVM live block migration. However, this is a little-known feature in OpenStack, with limited testing when compared to live migration, and is slated for deprecation in KVM upstream.

Choice of File System

If you want to support shared storage live migration, you'll need to configure a distributed file system.

Possible options include:

  • NFS (default for Linux)
  • GlusterFS
  • MooseFS
  • Lustre

We've seen deployments with all, and recommend you choose the one you are most familiar with operating.

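Whichever file system you choose, the nova instances directory must actually live on the shared mount before live migration will work; /var/lib/nova/instances is the usual default, but the path and the file system type names below are assumptions to adjust for your deployment. A minimal sanity-check sketch:

    # Minimal sanity check: is the instances directory on a shared file system?
    # The path and the set of file system type names are assumptions.
    INSTANCES_PATH = "/var/lib/nova/instances"
    SHARED_FS_TYPES = {"nfs", "nfs4", "fuse.glusterfs", "fuse.mfs", "lustre"}

    def mount_for(path):
        """Return (mountpoint, fstype) of the deepest mount containing path."""
        best = ("/", "unknown")
        with open("/proc/mounts") as mounts:
            for line in mounts:
                _device, mountpoint, fstype = line.split()[:3]
                if path.startswith(mountpoint) and len(mountpoint) >= len(best[0]):
                    best = (mountpoint, fstype)
        return best

    mountpoint, fstype = mount_for(INSTANCES_PATH)
    print("%s is mounted at %s (%s)" % (INSTANCES_PATH, mountpoint, fstype))
    print("Looks shared: %s" % (fstype in SHARED_FS_TYPES))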

Overcommitting

OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows you to increase the number of instances you can have running on your cloud, at the cost of reducing the performance of the instances. OpenStack Compute uses the following ratios by default:

  • CPU allocation ratio: 16
  • RAM allocation ratio: 1.5

The default CPU allocation ratio of 16 means that the scheduler allocates up to 16 virtual cores on a node per physical core. For example, if a physical node has 12 cores and each virtual machine instance uses 4 virtual cores, the scheduler allocates up to 192 virtual cores to instances (for example, 48 instances where each instance has 4 virtual cores).

Similarly, the default RAM allocation ratio of 1.5 means that the scheduler allocates instances to a physical node as long as the total amount of RAM associated with the instances is less than 1.5 times the amount of RAM available on the physical node.

For example, if a physical node has 48 GB of RAM, the scheduler allocates instances to that node until the sum of the RAM associated with the instances reaches 72 GB (for example, nine instances where each instance has 8 GB of RAM).

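In nova.conf these defaults are typically controlled by the cpu_allocation_ratio and ram_allocation_ratio options consumed by the scheduler filters. The worked examples above reduce to the following arithmetic:

    # Arithmetic behind the worked examples above (figures from the text).
    cpu_allocation_ratio = 16.0
    ram_allocation_ratio = 1.5

    physical_cores = 12
    vcpus_per_instance = 4
    physical_ram_gb = 48
    ram_per_instance_gb = 8

    schedulable_vcpus = physical_cores * cpu_allocation_ratio           # 192
    instances_by_cpu = int(schedulable_vcpus // vcpus_per_instance)     # 48

    schedulable_ram_gb = physical_ram_gb * ram_allocation_ratio         # 72
    instances_by_ram = int(schedulable_ram_gb // ram_per_instance_gb)   # 9

    print("Schedulable vCPUs on the node: %d" % schedulable_vcpus)
    print("Instances by CPU limit:        %d" % instances_by_cpu)
    print("Schedulable RAM on the node:   %d GB" % schedulable_ram_gb)
    print("Instances by RAM limit:        %d" % instances_by_ram)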

You must select the appropriate CPU and RAM allocation ratio for your particular use case.

Logging

Logging is detailed more fully in the section called "Logging". However, it is an important design consideration to take into account before you commence operations of your cloud.

OpenStack produces a great deal of useful logging information; however, for it to be useful for operations purposes, you should consider having a central logging server to send logs to, and a log parsing/analysis system (such as logstash).

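As a minimal illustration of the idea (logs.example.com and port 514 are placeholders; in practice you would more likely point rsyslog or the services' use_syslog option at the central host), a Python tool can forward its own logs to a central syslog server like this:

    # Minimal sketch: forward logs to a central syslog server.
    # "logs.example.com" is a placeholder for your central logging host.
    import logging
    import logging.handlers

    logger = logging.getLogger("cloud-ops")
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(address=("logs.example.com", 514))
    handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("nova-compute restarted on compute01")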

Networking

Networking in OpenStack is a complex, multi-faceted challenge. See Chapter 6, Network Design.
