Ceph Maintenance Series (2) -- Removing OSDs

1 Overview

This article describes how to remove one or more OSDs (disks) from a single Ceph node.

2 Environment Information

2.1 OS Version

[root@proceph05 ~]# cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core) 
[root@proceph05 ~]#

2.2 Ceph Version

[cephadmin@proceph05 ~]$ ceph -v
ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
[cephadmin@proceph05 ~]$

2.3 Ceph Cluster Overview

The cluster currently has 5 Ceph nodes, each with 6 disks (OSDs), for 30 OSDs in total.

3 Procedure

This article follows the official documentation: https://docs.ceph.com/en/nautilus/rados/operations/add-or-rm-osds/

3.1 Check OSD Status

ceph osd tree
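To see which OSDs belong to a particular host before taking any out, the tree output can be filtered, and ceph osd find reports where a given OSD lives. A minimal sketch, assuming the target host is proceph05 and the disk to be removed is osd.29:

ceph osd tree | grep -A 6 proceph05    # the host bucket and its 6 OSDs
ceph osd find 29                       # host and CRUSH location of osd.29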

3.2 Take the OSD out of the Cluster

ceph osd out osd.29
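Marking an OSD out tells CRUSH to stop assigning data to it, and Ceph begins migrating its placement groups to the remaining OSDs. When several disks on the same node are being removed, ceph osd out accepts multiple IDs in one call; for example, for the three OSDs shown in the status output below:

ceph osd out osd.27 osd.28 osd.29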

3.3 Observe the Data Migration

[cephadmin@proceph05 ~]$ ceph -w
  cluster:
    id:     9cdee1f8-f168-4151-82cd-f6591855ccbe
    health: HEALTH_WARN
            10 nearfull osd(s)
            1 pool(s) nearfull
            Low space hindering backfill (add storage if this doesn't resolve itself): 19 pgs backfill_toofull
            4 pgs not deep-scrubbed in time
            8 pgs not scrubbed in time

  services:
    mon: 5 daemons, quorum proceph01,proceph02,proceph03,proceph04,proceph05 (age 5w)
    mgr: proceph01(active, since 16M), standbys: proceph03, proceph02, proceph04, proceph05
    osd: 30 osds: 30 up (since 9M), 27 in (since 41h); 20 remapped pgs

  data:
    pools:   1 pools, 512 pgs
    objects: 13.89M objects, 53 TiB
    usage:   158 TiB used, 61 TiB / 218 TiB avail
    pgs:     541129/41679906 objects misplaced (1.298%)
             486 active+clean
             19  active+remapped+backfill_toofull
             6   active+clean+scrubbing+deep
             1   active+remapped+backfilling

  io:
    client:   1018 KiB/s rd, 41 MiB/s wr, 65 op/s rd, 2.83k op/s wr
    recovery: 13 MiB/s, 3 objects/s

  progress:
    Rebalancing after osd.28 marked out
      [=========================.....]
    Rebalancing after osd.29 marked out
      [==========================....]
    Rebalancing after osd.27 marked out
      [=========================.....]
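The nearfull and backfill_toofull warnings above show that this cluster is short on free space, so the rebalance after marking three OSDs out takes a while. Before stopping any OSD daemon, wait until the migration finishes and the placement groups report active+clean; the overall state can be checked with:

ceph -s              # summary of cluster health and PG states
ceph health detail   # details on the nearfull / backfill_toofull warnings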

3.4 Stopping the OSD
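Per the official documentation, once the data migration has completed, stop the OSD daemon on the host that owns the disk so that it goes down before being removed. A sketch, assuming osd.29 runs on host proceph05 (substitute the actual host and ID):

ssh proceph05
sudo systemctl stop ceph-osd@29

After it is stopped, ceph osd tree should show the OSD as down.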

3.5 Removing the OSD
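On Nautilus the simplest path is ceph osd purge, which removes the OSD from the CRUSH map, deletes its cephx key, and removes it from the OSD map in one step; the older step-by-step commands from the same documentation are listed as an alternative. Again using osd.29 as the example:

ceph osd purge 29 --yes-i-really-mean-it

# or, step by step:
ceph osd crush remove osd.29
ceph auth del osd.29
ceph osd rm osd.29

If the OSD has an entry in ceph.conf on the nodes, remove that entry as well.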
