Commonly Used Ceph Commands

List pools

ceph osd pool ls

 

List objects in a pool

 

rados -p ${poolname} ls

 

 

Remove an OSD:

systemctl list-units | grep ceph # list the ceph services
systemctl stop [email protected]
ceph osd out 5
ceph osd crush remove osd.5
ceph auth del osd.5
ceph osd rm 5

# batch removal script
loop=0; while [ "$loop" -lt 7 ]; do
    systemctl stop ceph-osd@${loop}.service
    ceph osd out ${loop}
    ceph osd crush remove osd.${loop}
    ceph auth del osd.${loop}
    ceph osd rm ${loop}
    echo "finished osd.${loop}"
    loop=$((loop + 1))
done
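After the loop finishes it is usually worth confirming that the OSDs are really gone and watching the rebalance; a quick check (nothing here is specific to this cluster):

ceph osd tree   # the removed OSDs should no longer be listed
ceph -s         # overall cluster status and recovery/rebalance progress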

 

 

To find which disk the OSD you just removed corresponds to, and then wipe that disk:


lvs

 

Note the VG ID (the one starting with a12 in this example).

[screenshot of the lvs output omitted]

 

You can see that VG maps to sdb.
Removing that VG with vgremove then effectively wipes the disk.
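A minimal sketch of that cleanup; the VG name below is hypothetical (take the real one, starting with a12 here, from your own lvs output):

lvs -o +devices            # also shows which physical device (e.g. sdb) each LV/VG sits on
vgremove ceph-a12xxxxxxxx  # hypothetical VG name, confirm removal when prompted
pvremove /dev/sdb          # optionally clear the remaining PV label as well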

 

After creating a pool, remember to enable the rbd application on it

 

ceph osd pool application enable images rbd
ceph osd pool application enable compute rbd
ceph osd pool application enable volumes rbd
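To confirm the application tag took effect (same pools as above), it can be queried back; the output should show rbd enabled:

ceph osd pool application get images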

 

 

Export, edit, and test the CRUSH map

 

ceph osd getcrushmap -o /home/ttt.x       # export the binary crushmap
crushtool -d /home/ttt.x -o /home/ttt.txt # decompile the crushmap to text
crushtool -c /home/ttt.txt -o /home/ttt.x # compile the edited text back to a binary crushmap
crushtool -i /home/ttt.x --test --min-x 0 --max-x 9 --num-rep 3 --ruleset 0 --show_mappings # test mappings
crushtool -i /home/ttt.x --test --min-x 0 --max-x 100000 --num-rep 3 --ruleset 0 --show_utilization # test distribution
ceph osd setcrushmap -i /home/ttt.x       # inject the crushmap into the cluster so it takes effect
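After injecting, a sanity check is not strictly required but cheap:

ceph osd tree             # confirm the buckets/hosts look as edited
ceph osd crush rule dump  # confirm the rules that were compiled in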

 

 

Delete a pool


First set the following in ceph.conf (and restart the mon daemons so it takes effect):
[mon]
mon allow pool delete = true
Then run:
ceph osd pool delete test test --yes-i-really-really-mean-it
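If you prefer not to edit ceph.conf and restart the mons, the same switch can usually be injected at runtime (Luminous and later); a sketch:

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete test test --yes-i-really-really-mean-it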

 

Put data into Ceph

 

echo {Test-data} > testfile.txt
rados put test-object-1 testfile.txt --pool=images
ceph osd map images test-object-1
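To confirm the object landed, or read it back, using the same pool and object name as above:

rados -p images stat test-object-1            # size and mtime
rados get test-object-1 out.txt --pool=images # read the object back into a local file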

 

 

Remove a cache tier

 

ceph osd tier remove-overlay images
ceph osd tier remove images images-cache
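Note: for a writeback cache the overlay usually will not detach cleanly while dirty objects remain. A hedged sketch of the steps commonly run before the two commands above (flag requirements vary slightly by release):

ceph osd tier cache-mode images-cache forward --yes-i-really-mean-it  # stop new writes landing in the cache
rados -p images-cache cache-flush-evict-all                           # drain the cache (see below)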

 

 

Show an object's location

 

ceph osd map images test-object-1

 

 

Try to flush/evict the cache tier to the backing pool

 

rados -p images-cache cache-try-flush-evict-all

 


Force-flush/evict the cache tier to the backing pool

 

rados -p images-cache cache-flush-evict-all

 

 

Batch data-insertion script

 

touch /home/file
dd if=/dev/zero of=/home/file bs=1M count=128

loop=0; while [ "$loop" -lt 40 ]; do
rados put objectx2-${loop} /home/file --pool=images
loop=$((loop + 1))
echo "finsh $loop "
done
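A quick way to check that the 40 objects landed (same images pool as above):

rados -p images ls | grep objectx2 | wc -l  # should count 40
rados df                                    # per-pool object counts and usage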

 

 

Add a new OSD

 

# using sdb as an example
ceph-volume lvm zap /dev/sdb
ceph-volume lvm create --data /dev/sdb
ceph-volume lvm activate --all
# a newly added OSD is usually not in the crushmap yet; edit and re-import the crushmap as appropriate so it takes effect
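To confirm the new OSD came up (these checks are generic, not specific to sdb):

ceph-volume lvm list  # shows the LV/OSD created from /dev/sdb
ceph osd tree         # the new OSD should appear, possibly unplaced until the crushmap is updated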

 

 

Delete a snapshot

 

 

# delete the snapshot
# rbd snap rm images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45@snap
Removing snap: 0% complete...failed.
rbd: snapshot 'snap' is protected from removal.
2018-11-09 15:36:26.722089 7f3fdb7fe700 -1 librbd::Operations: snapshot is protected

# unprotect the protected snapshot
# rbd snap unprotect images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45@snap
2018-11-09 15:36:58.434813 7f2747fff700 -1 librbd::SnapshotUnprotectRequest: cannot unprotect: at least 4 child(ren) [683083b714e54,683596d147edb,683ad439a8774,683ce4342b692] in pool 'compute'
2018-11-09 15:36:58.434834 7f2747fff700 -1 librbd::SnapshotUnprotectRequest: encountered error: (16) Device or resource busy
2018-11-09 15:36:58.434851 7f2747fff700 -1 librbd::SnapshotUnprotectRequest: 0x563fdf679bd0 should_complete_error: ret_val=-16
2018-11-09 15:36:58.435872 7f2747fff700 -1 librbd::SnapshotUnprotectRequest: 0x563fdf679bd0 should_complete_error: ret_val=-16
rbd: unprotecting snap failed: (16) Device or resource busy

# find the snapshot's children
# rbd children images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45@snap
compute/0b6c0764-d31e-49a1-b5ad-06c912ca29fd_disk

# delete the child image
# rbd rm compute/0b6c0764-d31e-49a1-b5ad-06c912ca29fd_disk
Removing image: 100% complete...done.

# unprotect the protected snapshot
# rbd snap unprotect images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45@snap

# delete the snapshot
# rbd snap rm images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45@snap
Removing snap: 100% complete...done.
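To double-check the result afterwards, the image's remaining snapshots can be listed (same image as above); the snapshot list should now be empty:

rbd snap ls images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45
rbd info images/2b174f8e-1e81-4b0f-ac58-010e4a3fab45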
