Adding a new OSD to a ceph cluster

We will add two new OSDs. The OSDs originally created with ceph-deploy are only 5G each, which is too small to be convenient for testing.

The original OSDs are:

 

osd.0

osd.1

 

Now we add osd.2 and osd.3:

 

OSD ID    OSD node                      Disk        Size

osd.0     10.33.41.136/192.168.11.5     /dev/vdb    5G

osd.1     10.33.41.139/192.168.11.6     /dev/vdb    5G

osd.2     10.33.41.136/192.168.11.5     /dev/vdc    20G

osd.3     10.33.41.139/192.168.11.6     /dev/vdc    20G

 

(1) Extend the cinder-volumes volume group on 10.33.41.55

Extend cinder-volumes by 128G of additional disk space:

# pvcreate /dev/sda3

# vgextend cinder-volumes /dev/sda3

(2) Create two new cloud disks (volumes) and attach them to 10.33.41.136 and 10.33.41.139

(3) Add osd.2 (on 10.33.41.136) and osd.3 (on 10.33.41.139)

The procedure follows the official quick-start guide:

http://docs.ceph.org.cn/start/quick-ceph-deploy/

 

My procedure:

On 10.33.41.136:

(1) First partition vdc, creating a single partition vdc1 (lsblk output):

vdc   253:32   0    20G 0 disk

└─vdc1 253:33   0    20G 0 part

(2) Format /dev/vdc1 as ext4:

# mkfs.ext4 /dev/vdc1

(3) Create the mount point and mount /dev/vdc1 at /var/osd2:

# mkdir -p /var/osd2

# mount -t ext4 /dev/vdc1 /var/osd2
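To keep the OSD mount across reboots, a matching /etc/fstab entry can be added. This is a sketch based on the device and mount point above; the mount options are my assumption, adjust for your environment:

```
# /etc/fstab — assumed entry for the osd.2 data partition
/dev/vdc1  /var/osd2  ext4  defaults,noatime,user_xattr  0  2
```

user_xattr matters for ext4-backed OSDs of this era, since the OSD stores metadata in extended attributes.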

(4) Prepare the OSD with ceph-deploy:

# ceph-deploy osd prepare lxpnode2:/var/osd2

(5) Activate the OSD:

# ceph-deploy osd activate lxpnode2:/var/osd2

(6) Check the cluster status (this output was captured right after adding osd.2, before osd.3 was added, hence 3 OSDs):

# ceph -s

   cluster 2a5f361c-2b64-4512-b710-719409f05f10

    health HEALTH_OK

    monmap e1: 1 mons at {lxpnode1=192.168.11.4:6789/0}

           election epoch 2, quorum 0 lxpnode1

    osdmap e23: 3 osds: 3 up, 3 in

     pgmap v1908: 252 pgs, 3 pools, 12914 kB data, 7 objects

           8067 MB used, 47944 MB / 59080 MB avail

                 252 active+clean

recovery io 1700 kB/s, 1 keys/s, 0 objects/s

 

# ceph osd stat

    osdmap e23: 3 osds: 3 up, 3 in
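The up/in counts in that one-line summary are easy to check mechanically. Here is a small sketch that parses the sample line above with sed (it operates on the captured text, not a live cluster; the variable names are my own):

```shell
# Sample line as printed by `ceph osd stat` above.
stat_line="osdmap e23: 3 osds: 3 up, 3 in"

# Pull the total/up/in counts out of the line.
osd_total=$(printf '%s\n' "$stat_line" | sed -n 's/.*osdmap e[0-9]*: \([0-9]*\) osds.*/\1/p')
osd_up=$(printf '%s\n' "$stat_line" | sed -n 's/.*osds: \([0-9]*\) up.*/\1/p')
osd_in=$(printf '%s\n' "$stat_line" | sed -n 's/.* \([0-9]*\) in$/\1/p')

# A healthy cluster has every OSD both up (running) and in (holding data).
if [ "$osd_total" = "$osd_up" ] && [ "$osd_total" = "$osd_in" ]; then
  echo "all $osd_total OSDs are up and in"
else
  echo "degraded: $osd_up/$osd_total up, $osd_in/$osd_total in"
fi
```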

# ceph osd dump

epoch 23

fsid 2a5f361c-2b64-4512-b710-719409f05f10

created 2016-04-28 09:32:07.631465

modified 2016-05-04 13:49:31.018699

flags

pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

pool 1 'rbd_pool' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 60 pgp_num 60 last_change 10 flags hashpspool stripe_width 0

pool 2 'images' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 15 flags hashpspool stripe_width 0

       removed_snaps [1~1]

max_osd 3
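Since every pool above is "replicated size 2", each object is written twice, so usable capacity is roughly half of the raw 59080 MB reported by `ceph -s`. A quick back-of-envelope sketch (this ignores space already consumed and the OSD full ratios, so it is only an estimate):

```shell
# Rough usable-capacity estimate for a replicated cluster.
raw_mb=59080        # raw total from the `ceph -s` output above
replica_size=2      # "replicated size 2" from `ceph osd dump`
usable_mb=$((raw_mb / replica_size))
echo "approx usable capacity: ${usable_mb} MB"
```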
