Cinder QoS rate limiting

Author: Wu Yeliang

Blog: https://wuyeliang.blog.csdn.net/

Cinder supports setting QoS on either the front end or the back end. Front-end means the hypervisor side: the limits are enforced on the compute host for the guest, typically with cgroups or qemu io throttling. Back-end means the limits are set on the storage device itself, which requires support from that device.
Ceph RBD does not support back-end QoS, so QoS for data volumes has to be applied on the front end with qemu io throttling. Using Ceph RBD storage as an example, the steps to set QoS on a data volume are as follows:
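For reference, the front-end limits are ultimately applied as qemu io throttling on the instance's disk, which is the same mechanism virsh blkdeviotune exposes. A minimal sketch of setting such limits by hand, assuming a domain named instance-00000003 and a target device vdb (both placeholders):

# manually apply the same qemu io throttling limits that Nova sets
# when it attaches a QoS-enabled volume (domain and device are placeholders)
virsh blkdeviotune instance-00000003 vdb \
    --read-bytes-sec 50000000 --write-bytes-sec 50000000 \
    --read-iops-sec 400 --write-iops-sec 400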

Parameter descriptions:

total_bytes_sec: total allowed bandwidth for the guest, in bytes per second
read_bytes_sec: read bandwidth limit, in bytes per second
write_bytes_sec: write bandwidth limit, in bytes per second
total_iops_sec: total allowed IOPS for the guest
read_iops_sec: read IOPS limit
write_iops_sec: write IOPS limit
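If a single overall cap is preferred instead of separate read/write limits, the total_* keys can be used instead; note that libvirt rejects combining a total_* limit with the corresponding read_*/write_* limits for the same metric. A minimal sketch (spec name and values are illustrative):

cinder qos-create ceph-ssd-total-qos consumer=front-end \
    total_bytes_sec=100000000 total_iops_sec=800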

Set the QoS spec

[root@node1 ~]# cinder qos-create ceph-ssd-qos consumer=front-end read_bytes_sec=50000000 write_bytes_sec=50000000 read_iops_sec=400 write_iops_sec=400
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| consumer | front-end                            |
| id       | fc315a12-d450-4a47-91fa-57792bb80932 |
| name     | ceph-ssd-qos                         |
| specs    | read_bytes_sec : 50000000            |
|          | read_iops_sec : 400                  |
|          | write_bytes_sec : 50000000           |
|          | write_iops_sec : 400                 |
+----------+--------------------------------------+
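An existing spec can be adjusted later with cinder qos-key; volumes that are already attached keep their old limits until they are re-attached. A sketch using the ID created above:

# raise the write IOPS limit of the spec created above
cinder qos-key fc315a12-d450-4a47-91fa-57792bb80932 set write_iops_sec=500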

Create a volume type

[root@node1 ~]# cinder type-create ceph-storage
+--------------------------------------+--------------+-------------+-----------+
| ID                                   | Name         | Description | Is_Public |
+--------------------------------------+--------------+-------------+-----------+
| 4c9c7c5a-a15c-41ab-98c5-45cdea8c91cd | ceph-storage | -           | True      |
+--------------------------------------+--------------+-------------+-----------+
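The same type can also be created with the unified openstack client, if you prefer it over the cinder CLI:

openstack volume type create ceph-storage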

Bind the volume type to the storage backend

# cinder --os-username admin --os-tenant-name admin type-key ceph-storage set volume_backend_name=ceph

Note: the backend name (volume_backend_name) can be checked in cinder.conf:

[root@node1 ~]# cat /etc/cinder/cinder.conf | grep volume_backend_name | grep -v ^#
volume_backend_name = ceph
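For reference, volume_backend_name is defined in the backend section of cinder.conf. A typical Ceph RBD backend section looks roughly like the following (pool name, user and secret UUID are illustrative and must match your environment):

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <secret-uuid>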

List volume types

[root@node1 ~]# cinder type-list
+--------------------------------------+--------------+-------------+-----------+
| ID                                   | Name         | Description | Is_Public |
+--------------------------------------+--------------+-------------+-----------+
| 4c9c7c5a-a15c-41ab-98c5-45cdea8c91cd | ceph-storage | -           | True      |
+--------------------------------------+--------------+-------------+-----------+
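The extra spec set with type-key can be double-checked as well:

# lists each volume type together with its extra specs
cinder extra-specs-list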

List QoS specs

[root@node1 ~]# cinder qos-list
+--------------------------------------+--------------+-----------+----------------------------------------------------------------------------------------------------------------+
| ID                                   | Name         | Consumer  | specs                                                                                                          |
+--------------------------------------+--------------+-----------+----------------------------------------------------------------------------------------------------------------+
| fc315a12-d450-4a47-91fa-57792bb80932 | ceph-ssd-qos | front-end | {'read_bytes_sec': '50000000', 'write_iops_sec': '400', 'write_bytes_sec': '50000000', 'read_iops_sec': '400'} |
+--------------------------------------+--------------+-----------+----------------------------------------------------------------------------------------------------------------+
[root@node1 ~]# 

Associate the volume type with the QoS spec

Syntax: cinder qos-associate QOS_ID TYPE_ID

[root@node1 ~]# cinder qos-associate fc315a12-d450-4a47-91fa-57792bb80932  4c9c7c5a-a15c-41ab-98c5-45cdea8c91cd
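The association can be verified with qos-get-association:

cinder qos-get-association fc315a12-d450-4a47-91fa-57792bb80932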

Verify

Create a volume of this type and attach it to an instance (a sketch follows), then check the throttling settings in the instance's libvirt XML on the compute node:
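A minimal sketch of this step; the volume name qos-test, the 10 GB size and the UUIDs are placeholders (older cinder clients use --display-name instead of --name). Keep in mind that the limits are written into the instance's libvirt XML at attach time, so a volume attached before the QoS association existed has to be detached and re-attached to pick them up.

# create a 10 GB volume of the QoS-enabled type
cinder create --volume-type ceph-storage --name qos-test 10

# attach it to a running instance (UUIDs are placeholders)
nova volume-attach <instance_uuid> <volume_uuid>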

[root@node1 ~]# virsh dumpxml 3 | grep sec
        <read_bytes_sec>50000000</read_bytes_sec>
        <write_bytes_sec>50000000</write_bytes_sec>
        <read_iops_sec>400</read_iops_sec>
        <write_iops_sec>400</write_iops_sec>
[root@node1 ~]#
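The limits can also be queried directly through libvirt; domain 3 is the instance dumped above, and the device name vdb is illustrative (take it from domblklist):

# find the target device name of the attached volume
virsh domblklist 3

# query the current throttling values for that device
virsh blkdeviotune 3 vdb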

Reference:

https://docs.openstack.org/cinder/latest/admin/blockstorage-capacity-based-qos.html
