ceph-deploy disk list opennebula11
ceph-deploy disk zap opennebula11 /dev/sdb
ceph-deploy osd create --data /dev/sdb opennebula00
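A quick way to confirm the new OSD came up and joined the cluster:
ceph osd tree
ceph -s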
Ran parted directly on /dev/sdm and created the partition /dev/sdm1, but formatting /dev/sdm1 with mkfs.xfs failed: cannot open /dev/sdm: Device or resource busy
Solution:
Run dmsetup ls to see what is holding the device and look for the ceph-* entry (ceph-* is the device-mapper name that lsblk shows for that block device)
Use dmsetup to remove that entry:
dmsetup remove ceph-
lsblk now shows the device details with the ceph-* entries gone
mkfs.xfs -f /dev/sdm now succeeds
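A minimal sketch of the whole sequence, assuming the stale mapping is named ceph--example--vg-osd--block--example (a hypothetical name; substitute whatever dmsetup ls actually prints on the node):
dmsetup ls | grep ceph                                # find the leftover ceph-* device-mapper entry
dmsetup remove ceph--example--vg-osd--block--example  # hypothetical name, replace with the real one
lsblk /dev/sdm                                        # the ceph-* entry should be gone now
mkfs.xfs -f /dev/sdm                                  # formatting goes through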
ceph osd crush rm-device-class osd.8
for i in 30 31 16 17 9 10 23 24; do ceph osd crush rm-device-class osd.$i;done
ceph osd crush set-device-class ssd osd.8
for i in 30 31 16 17 9 10 23 24; do ceph osd crush set-device-class ssd osd.$i;done
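To confirm the device-class changes took effect:
ceph osd crush class ls          # should now include ssd
ceph osd crush class ls-osd ssd  # OSD ids currently tagged as ssd
ceph osd tree                    # CLASS column shows ssd for the converted OSDs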
ceph osd crush rule create-replicated ssd_rule default host ssd
ceph osd pool create cache 64 64 ssd_rule
ceph osd pool get cache crush_rule
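The rule itself can also be inspected to verify it takes the ssd class:
ceph osd crush rule dump ssd_rule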
ceph osd tier add one cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay one cache
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 1
ceph osd pool set cache hit_set_period 3600 # 1 hour
ceph osd pool set cache target_max_bytes 1099511627776 # 1 TB
ceph osd pool set cache target_max_objects 256
ceph osd pool set cache cache_min_flush_age 60
ceph osd pool set cache cache_min_evict_age 600
#### Flush dirty objects to the backing pool once they reach 40% of the cache
ceph osd pool set cache cache_target_dirty_ratio 0.4
#### Start high-speed flushing once dirty objects reach 60% of the cache
ceph osd pool set cache cache_target_dirty_high_ratio 0.6
ceph osd pool set cache cache_target_full_ratio 0.8
ceph osd pool set cache min_read_recency_for_promote 1
ceph osd pool set cache min_write_recency_for_promote 1
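To review the resulting tier setup and the values applied above:
ceph osd pool ls detail       # 'one' should list the cache tier; 'cache' should show cache_mode writeback
ceph osd pool get cache all   # dumps every setting on the cache pool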
ceph osd tier cache-mode cache forward --yes-i-really-mean-it
rados -p cache ls
rados -p cache cache-flush-evict-all
ceph osd tier remove-overlay one
ceph osd tier remove one cache
ceph osd tier cache-mode cache none
ceph osd tier remove one cache
rados -p cache ls
rbd ls -l -p one --id libvirt
rbd list -p one
rbd rm -p one one-51-116-0
ceph osd getcrushmap -o crushmap.txt
crushtool -d crushmap.txt -o crushmap-decompile
vi crushmap-decompile
# rules
rule replicated_rule {
    id 0
    type replicated
    min_size 1
    max_size 10
    step take default class ssd
    step chooseleaf firstn 1 type host
    step emit
    step take default class hdd
    step chooseleaf firstn -1 type host
    step emit
}
choose selects buckets of the failure-domain type as the result, while chooseleaf descends into each chosen failure domain and selects an OSD beneath it; firstn is used for replicated pools and indep for erasure-coded pools; the number that follows is the replica count to select: a positive value selects that many replicas, 0 selects all replicas (the pool size), and a negative value selects the pool size minus that many.
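For comparison, a chooseleaf indep rule for an erasure-coded pool might look like the following (the rule name and ids are made up for illustration, not taken from this cluster's map):
rule ecpool_rule {
    id 1
    type erasure
    min_size 3
    max_size 6
    step set_chooseleaf_tries 5
    step take default
    step chooseleaf indep 0 type host
    step emit
}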
crushtool -c crushmap-decompile -o crushmap-compiled
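Before injecting the new map, crushtool can simulate placements to sanity-check the edited rule (rule id 0 and 3 replicas are assumptions here; adjust to the rule and pool size being tested):
crushtool -i crushmap-compiled --test --rule 0 --num-rep 3 --show-mappings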
ceph osd setcrushmap -i crushmap-compiled