04 - Manually Deploying Ceph 15.2.5 (Octopus): OSD Service Configuration

Article Structure

I. Installation and Deployment

  1. Prepare basic cluster configuration
  2. Configure ceph-osd
  3. Start the ceph-osd service

We are currently on the VM osd2 (192.168.10.43).

1. Prepare basic cluster configuration

Copy the relevant configuration files already created on monosd (192.168.10.42) to this machine:

bash> scp root@monosd:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
bash> scp root@monosd:/var/lib/ceph/bootstrap-osd/ceph.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
bash> scp root@monosd:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
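Before continuing, it may be worth confirming that the copied credentials actually work. A minimal check, assuming the scp commands above succeeded:

 #Confirm the three files arrived
bash> ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
 #Confirm this node can reach the monitor using the admin keyring
bash> ceph -s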

2. Configure ceph-osd

bash> ceph-volume lvm prepare --data /dev/sdb
bash> ceph-volume lvm list
---
====== osd.0 =======
  [block]       /dev/ceph-920942d0-38fc-4b38-9193-81677af5d5e5/osd-block-da930cdb-d64c-481f-ac0c-e0d434adf2d8
      block device              /dev/ceph-920942d0-38fc-4b38-9193-81677af5d5e5/osd-block-da930cdb-d64c-481f-ac0c-e0d434adf2d8
      block uuid                2CJorX-iXz0-Tqjn-Zrnz-3hdf-hOEi-1eJWtQ
      cephx lockbox secret
      cluster fsid              611b25ed-0794-43a5-954c-26e2ba4191a3
      cluster name              ceph
      crush device class        None
      encrypted                 0
      osd fsid                  da930cdb-d64c-481f-ac0c-e0d434adf2d8
      osd id                    0
      osdspec affinity
      type                      block
      vdo                       0
      devices                   /dev/sdb

bash> ceph-volume lvm activate 0 da930cdb-d64c-481f-ac0c-e0d434adf2d8
      #ceph-volume lvm activate  {ID} {FSID}

/dev/sdb can be found by checking for free disks with lsblk or fdisk -l; substitute it to match your environment.
{ID} and {FSID} are taken from the output of ceph-volume lvm list above (the osd id and osd fsid fields).
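As a side note, prepare and activate can also be combined into a single step with ceph-volume lvm create, and for scripting, the {ID}/{FSID} pair can be extracted from the JSON listing. A sketch (the jq path assumes the Octopus JSON layout and that jq is installed):

 #Prepare + activate in one command (alternative to the two-step flow above)
bash> ceph-volume lvm create --data /dev/sdb
 #Pull the osd fsid for osd.0 out of the JSON listing
bash> ceph-volume lvm list --format json | jq -r '.["0"][0].tags["ceph.osd_fsid"]'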

3. Start the ceph-osd service

Enable the service to start at boot, then start it:

 #The number after @ here is the {ID}
bash> systemctl enable ceph-osd@0
bash> systemctl start ceph-osd@0
 #Now run ceph -s to check the status; the newly started OSD can be seen
bash> ceph -s
---
  cluster:
    id:     611b25ed-0794-43a5-954c-26e2ba4191a3
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            Degraded data redundancy: 1 pg undersized
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum monosd (age 46m)
    mgr: monosd_mgr(active, starting, since 0.556561s)
    osd: 1 osds: 1 up (since 2m), 1 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   1.2 GiB used, 19 GiB / 20 GiB avail
    pgs:     100.000% pgs not active
             1 undersized+peered

The HEALTH_WARN above is expected: with only one OSD and the default replica count of 3 (osd_pool_default_size), the single PG cannot place all of its replicas, so it stays undersized+peered. Following the same procedure, I repeat these steps on the other two machines; one way to script this is sketched below.
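A minimal sketch, assuming step 1 (copying ceph.conf and the bootstrap-osd keyring) has already been done on each node, and that the other two nodes are reachable as osd3 and osd4 (hypothetical hostnames; substitute your own), each with a spare /dev/sdb:

 #Hostnames are hypothetical; ceph-volume lvm create combines prepare and activate
bash> for h in osd3 osd4; do ssh root@$h ceph-volume lvm create --data /dev/sdb; done

With all three OSDs up and in, ceph -s reports: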

bash> ceph -s
---
  cluster:
    id:     611b25ed-0794-43a5-954c-26e2ba4191a3
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum monosd (age 95s)
    mgr: monosd_mgr(active, since 83s)
    osd: 3 osds: 3 up (since 79s), 3 in (since 7m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   4.2 GiB used, 56 GiB / 60 GiB avail
    pgs:     1 active+clean
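As a final sanity check, the CRUSH placement and per-OSD usage of the three OSDs can be inspected with the standard CLI:

bash> ceph osd tree
bash> ceph osd df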

At this point, the deployment of this stripped-down basic Ceph cluster is complete.
REF. Ceph 15.2.5 Manual Deployment Series Notes
