Ceph: assigning different disk types to different pools

Version (before Luminous, this required manually editing the CRUSH map; Luminous adds device classes):
# ceph --version
ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)

Since this is a test environment, only two disks were added to a VM as OSDs, and their device classes were changed (typically via `ceph osd crush rm-device-class osd.N` followed by `ceph osd crush set-device-class <class> osd.N`) so that one is hdd and one is ssd.
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF 
-1       0.03998 root default                                   
-2             0     host 192.168.60.99                         
-5       0.03998     host control01                             
 1   hdd 0.01999         osd.1              up  1.00000 1.00000 
 0   ssd 0.01999         osd.0              up  1.00000 1.00000 


# ceph osd crush class ls
[
    "ssd",
    "hdd"
]


Create two replicated rules, both with failure domain host, one restricted to the ssd class and one to the hdd class (syntax: `ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>`):
# ceph osd crush rule create-replicated rule-ssd default  host ssd
# ceph osd crush rule create-replicated rule-hdd default  host hdd


Bind each pool to the appropriate rule:
# ceph osd pool set images crush_rule rule-hdd
# ceph osd pool set backups crush_rule rule-hdd
# ceph osd pool set volumes crush_rule rule-ssd
# ceph osd pool set vms crush_rule rule-ssd
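For more pools than the four here, the bindings can be generated from a small mapping instead of typed one by one. A minimal sketch (pool and rule names are the ones from this cluster; nothing else is assumed):

```python
# Generate the `ceph osd pool set <pool> crush_rule <rule>` commands
# from a pool -> rule mapping, rather than typing each one by hand.
pool_rules = {
    "images": "rule-hdd",
    "backups": "rule-hdd",
    "volumes": "rule-ssd",
    "vms": "rule-ssd",
}

def pool_set_commands(mapping):
    """Return the list of ceph CLI commands that apply the mapping."""
    return [f"ceph osd pool set {pool} crush_rule {rule}"
            for pool, rule in mapping.items()]

for cmd in pool_set_commands(pool_rules):
    print(cmd)
```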

Check the result of the binding:
# ceph osd pool ls detail 
pool 1 'images' replicated size 1 min_size 1 crush_rule 3 object_hash rjenkins pg_num 128 pgp_num 128 last_change 60 flags hashpspool stripe_width 0 application rbd
	removed_snaps [1~3]
pool 2 'volumes' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 64 flags hashpspool stripe_width 0 application rbd
	removed_snaps [1~3]
pool 3 'backups' replicated size 1 min_size 1 crush_rule 3 object_hash rjenkins pg_num 128 pgp_num 128 last_change 62 flags hashpspool stripe_width 0 application rbd
pool 4 'vms' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 128 pgp_num 128 last_change 66 flags hashpspool stripe_width 0 application rbd
	removed_snaps [1~3]
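Note that `ceph osd pool ls detail` reports the rule by numeric id, not by name: here crush_rule 2 is rule-ssd and crush_rule 3 is rule-hdd, consistent with the `pool set` commands above (`ceph osd crush rule dump` shows the id-to-name mapping). A quick sketch of pulling the pool-to-rule-id mapping out of that output for checking:

```python
import re

# Sample lines (truncated) from `ceph osd pool ls detail` on this cluster.
detail = """\
pool 1 'images' replicated size 1 min_size 1 crush_rule 3 object_hash rjenkins pg_num 128
pool 2 'volumes' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 128
pool 3 'backups' replicated size 1 min_size 1 crush_rule 3 object_hash rjenkins pg_num 128
pool 4 'vms' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 128
"""

def pool_rule_ids(text):
    """Map pool name -> crush_rule id from `ceph osd pool ls detail` output."""
    pat = re.compile(r"pool \d+ '([^']+)' .*?crush_rule (\d+)")
    return {m.group(1): int(m.group(2))
            for m in map(pat.search, text.splitlines()) if m}

print(pool_rule_ids(detail))
# {'images': 3, 'volumes': 2, 'backups': 3, 'vms': 2}
```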




Upload an image to the images pool and check whether its objects land on the intended OSD.

# glance image-show 1cf5aa5a-cc08-45cf-9d1e-5d9b77cd4e63
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | ecc9a5132e7a0f11a4c585f513cd0873                                                 |
| container_format | bare                                                                             |
| created_at       | 2019-05-16T17:13:42Z                                                             |
| direct_url       | rbd://20660ae0-3a39-40b4-9cbe-97cc046cbe40/images/1cf5aa5a-cc08-45cf-9d1e-       |
|                  | 5d9b77cd4e63/snap                                                                |
| disk_format      | qcow2                                                                            |
| id               | 1cf5aa5a-cc08-45cf-9d1e-5d9b77cd4e63                                             |
| locations        | [{"url": "rbd://20660ae0-3a39-40b4-9cbe-97cc046cbe40/images/1cf5aa5a-cc08-45cf-  |
|                  | 9d1e-5d9b77cd4e63/snap", "metadata": {}}]                                        |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | cirros520                                                                        |
| owner            | 7447893a665043fda4dcf573c5061173                                                 |
| protected        | False                                                                            |
| size             | 15731712                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2019-05-16T17:13:43Z                                                             |
| virtual_size     | None                                                                             |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+


# rbd ls images |grep 1cf5a
1cf5aa5a-cc08-45cf-9d1e-5d9b77cd4e63


# ceph osd map images 1cf5aa5a-cc08-45cf-9d1e-5d9b77cd4e63
osdmap e68 pool 'images' (1) object '1cf5aa5a-cc08-45cf-9d1e-5d9b77cd4e63' -> pg 1.70ef4ae7 (1.67) -> up ([1], p1) acting ([1], p1)

The object maps to osd.1, which is the hdd OSD, so the rule-hdd binding on the images pool is working as intended.
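The pg id in the output above can be cross-checked by hand: 0x70ef4ae7 is the rjenkins hash of the object name, and with pg_num = 128 (a power of two) the placement group is just the low bits of that hash. A quick check using the hash value printed by `ceph osd map` (the rjenkins computation itself is not reimplemented here):

```python
# `ceph osd map` printed: ... -> pg 1.70ef4ae7 (1.67) ...
# The long value is the rjenkins hash of the object name; the PG is
# hash mod pg_num, which for a power-of-two pg_num is a bit mask.
obj_hash = 0x70ef4ae7   # taken from the command output above
pg_num = 128

pg = obj_hash & (pg_num - 1)    # equivalent to obj_hash % pg_num here
print(f"pg 1.{pg:x}")
# pg 1.67
```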
