OpenStack, Ceph RBD and QoS

Published 2013-12-23 07:19:00 · Planet OpenStack · Original post: http://www.sebastien-han.fr/blog/2013/12/23/openstack-ceph-rbd-and-qos/

The Havana cycle introduced a QoS feature on both Cinder and Nova. Here is a quick tour of this excellent implementation.

Both QEMU and KVM natively support rate limiting. This is implemented through libvirt and exposed as an extra XML flag, iotune, within the <disk> section. The following parameters are available:

total_bytes_sec : the total allowed bandwidth for the guest per second

read_bytes_sec : sequential read limitation

write_bytes_sec : sequential write limitation

total_iops_sec : the total allowed IOPS for the guest per second

read_iops_sec : random read limitation

write_iops_sec : random write limitation
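
For reference, these are plain libvirt settings, so the same limits can also be applied or inspected directly with virsh outside of OpenStack. A minimal sketch, assuming a hypothetical libvirt domain named instance-00000001 and a hypothetical target device vdb:

# Hypothetical domain and device names, purely for illustration
$ virsh blkdeviotune instance-00000001 vdb --read-iops-sec 2000 --write-iops-sec 1000

# Read back the limits currently applied to that device
$ virsh blkdeviotune instance-00000001 vdb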

It is wonderful that OpenStack implemented such an (easy?) feature in both Nova and Cinder. It is also a sign that OpenStack is getting more featured and complete in its existing core projects. Having such a facility is extremely useful for several reasons. First of all, not all storage backends support QoS. For instance, Ceph doesn't have any built-in QoS feature whatsoever. Moreover, the limitation is applied directly at the hypervisor layer, so your storage solution doesn't even need to provide such a feature. Another good point is that, from an operator's perspective, it is quite nice to be able to offer different levels of service. Operators can now offer different types of volumes based on a certain QoS, and customers will be charged accordingly.

II. Test it!

First create the QoS in Cinder:

$ cinder qos-create high-iops consumer="front-end" read_iops_sec=2000 write_iops_sec=1000
+----------+---------------------------------------------------------+
| Property |                          Value                          |
+----------+---------------------------------------------------------+
| consumer |                        front-end                        |
|    id    |           c38d72f8-f4a4-4999-8acd-a17f34b040cb          |
|   name   |                        high-iops                        |
|  specs   | {u'write_iops_sec': u'1000', u'read_iops_sec': u'2000'} |
+----------+---------------------------------------------------------+
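
For a quick sanity check, the spec can be listed and inspected again at any time with the regular QoS commands (the id is the one returned above):

$ cinder qos-list
$ cinder qos-show c38d72f8-f4a4-4999-8acd-a17f34b040cb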

Create a new volume type:

$ cinder type-create high-iops
+--------------------------------------+-----------+
|                  ID                  |    Name   |
+--------------------------------------+-----------+
| 9c746ca5-eff8-40fe-9a96-1cdef7173bd0 | high-iops |
+--------------------------------------+-----------+
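
The new type also shows up in the regular type listing:

$ cinder type-list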

Then associate the volume type with the QoS and create a volume using that type:

$ cinder qos-associate c38d72f8-f4a4-4999-8acd-a17f34b040cb 9c746ca5-eff8-40fe-9a96-1cdef7173bd0

$ cinder create --display-name high-iops --volume-type high-iops 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2013-12-02T12:59:33.177875      |
| display_description |                 None                 |
|     display_name    |              high-iops               |
|          id         | 743549c1-c7a3-4e86-8e99-b51df4cf7cdc |
|       metadata      |                  {}                  |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |              high-iops               |
+---------------------+--------------------------------------+
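
Optionally, the binding between the volume type and the QoS spec can be checked from the QoS side, using the QoS id from the first step:

# Lists the volume types associated with this QoS spec
$ cinder qos-get-association c38d72f8-f4a4-4999-8acd-a17f34b040cb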

Finally, attach the volume to an instance:

$ nova volume-attach cirrOS 743549c1-c7a3-4e86-8e99-b51df4cf7cdc /dev/vdc
+----------+--------------------------------------+
| Property |                Value                 |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| serverId | 7fff1d37-efc4-46b9-8681-3e6b1086c453 |
| id       | 743549c1-c7a3-4e86-8e99-b51df4cf7cdc |
| volumeId | 743549c1-c7a3-4e86-8e99-b51df4cf7cdc |
+----------+--------------------------------------+

While attaching the device, you should see the following XML being generated in the nova-compute debug log. Dumping the virsh XML works as well.

2013-12-11 14:12:05.874 DEBUG nova.virt.libvirt.config [req-232cf5eb-a79b-42d5-a183-2f4758e8d8eb admin admin] Generated XML <disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source protocol="rbd" name="volumes/volume-2e589abc-a008-4433-89ae-1bb142b139e3">
    <host name="192.168.251.100" port="6790"/>
  </source>
  <auth username="volumes">
    <secret type="ceph" uuid="95c98032-ad65-5db8-f5d3-5bd09cd563ef"/>
  </auth>
  <target bus="virtio" dev="vdc"/>
  <serial>2e589abc-a008-4433-89ae-1bb142b139e3</serial>
  <iotune>
    <read_iops_sec>20</read_iops_sec>
    <write_iops_sec>5</write_iops_sec>
  </iotune>
</disk>
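
On the compute node, the applied limits can also be read back directly from libvirt. A minimal sketch, assuming a hypothetical libvirt domain name of instance-00000001 (look it up with virsh list) and the vdc device attached above:

# Show the <iotune> element from the live domain XML
$ virsh dumpxml instance-00000001 | grep -A 3 '<iotune>'

# Query the per-device throttling values libvirt is enforcing
$ virsh blkdeviotune instance-00000001 vdc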

Important note: rate limiting is currently broken in Havana, however the bug has already been reported and a fix has been submitted and accepted. The same patch has also been proposed as a potential backport for Havana.


