Adding Ceph OSDs, creating an RBD pool and RBD device, and configuring the Cinder backend

1.   Adding new OSDs to the Ceph cluster

 

Add two new OSDs: the OSDs originally created with ceph-deploy are only 5 GB each, which is far too small to be convenient for testing.

The existing OSDs are:

 

osd.0

osd.1

 

Now osd.2 and osd.3 are added:

 

OSD ID   OSD node                     Disk       Size
osd.0    10.33.41.136/192.168.11.5    /dev/vdb   5 GB
osd.1    10.33.41.139/192.168.11.6    /dev/vdb   5 GB
osd.2    10.33.41.136/192.168.11.5    /dev/vdc   20 GB
osd.3    10.33.41.139/192.168.11.6    /dev/vdc   20 GB

 

(1) Extend the cinder-volumes volume group on 10.33.41.55

Extend cinder-volumes by 128 GB of disk capacity:

# pvcreate /dev/sda3

# vgextend cinder-volumes /dev/sda3
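A quick way to confirm the volume group actually grew (standard LVM commands, shown here only as a sanity check):

# pvs
# vgs cinder-volumes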

(2) Create two new cloud disks and attach them to 10.33.41.136 and 10.33.41.139.

(3) Add osd.2 (on 10.33.41.136) and osd.3 (on 10.33.41.139).

The procedure follows the official quick start guide:

http://docs.ceph.org.cn/start/quick-ceph-deploy/

 

My steps:

On 10.33.41.136:

(1) First partition vdc, creating a single partition vdc1:

vdc   253:32   0    20G 0 disk

└─vdc1 253:33   0    20G 0 part
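The partition itself can be created with fdisk or parted; a minimal sketch with parted, giving the whole disk to one partition, would be:

# parted -s /dev/vdc mklabel gpt
# parted -s /dev/vdc mkpart primary 0% 100%
# partprobe /dev/vdc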

(2) Format /dev/vdc1 as ext4:

# mkfs.ext4 /dev/vdc1

(3) Mount /dev/vdc1 at /var/osd2:

# mount -t ext4 /dev/vdc1 /var/osd2
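One detail not shown above: the mount point has to exist before mounting, and an fstab entry keeps it mounted across reboots (a sketch; adjust the mount options to taste):

# mkdir -p /var/osd2
# echo '/dev/vdc1 /var/osd2 ext4 defaults,noatime 0 2' >> /etc/fstab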

(4) Prepare the OSD:

# ceph-deploy osd prepare lxpnode2:/var/osd2

(5) Activate the OSD:

# ceph-deploy osd activate lxpnode2:/var/osd2

(6) Check the cluster:

# ceph -s

   cluster 2a5f361c-2b64-4512-b710-719409f05f10

    health HEALTH_OK

    monmap e1: 1 mons at {lxpnode1=192.168.11.4:6789/0}

           election epoch 2, quorum 0 lxpnode1

    osdmap e23: 3 osds: 3 up, 3 in

     pgmap v1908: 252 pgs, 3 pools, 12914 kB data, 7 objects

           8067 MB used, 47944 MB / 59080 MB avail

                 252 active+clean

recovery io 1700 kB/s, 1 keys/s, 0 objects/s

 

# ceph osd stat

    osdmap e23: 3 osds: 3 up, 3 in

# ceph osd dump

epoch 23

fsid 2a5f361c-2b64-4512-b710-719409f05f10

created 2016-04-28 09:32:07.631465

modified 2016-05-04 13:49:31.018699

flags

pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0

pool 1 'rbd_pool' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 60 pgp_num 60 last_change 10 flags hashpspool stripe_width 0

pool 2 'images' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 15 flags hashpspool stripe_width 0

       removed_snaps [1~1]

max_osd 3
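ceph osd tree gives another useful view at this point, showing which host each OSD sits under and its CRUSH weight (a standard command; output omitted here):

# ceph osd tree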

 

Add /dev/vdc on 10.33.41.139 as the new osd.3 using the same procedure.
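For reference, the same sequence on that node looks roughly like this (a sketch mirroring the steps above; lxpnode3 is 10.33.41.139, and /var/osd3 is an assumed data directory name):

# mkfs.ext4 /dev/vdc1
# mkdir -p /var/osd3
# mount -t ext4 /dev/vdc1 /var/osd3
# ceph-deploy osd prepare lxpnode3:/var/osd3
# ceph-deploy osd activate lxpnode3:/var/osd3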

 

After this:

root@lxpnode1:/etc/ceph# ceph -s

   cluster 2a5f361c-2b64-4512-b710-719409f05f10

    health HEALTH_OK

    monmap e1: 1 mons at {lxpnode1=192.168.11.4:6789/0}

           election epoch 2, quorum 0 lxpnode1

     osdmap e33: 4 osds: 4 up, 4 in

     pgmap v1941: 252 pgs, 3 pools, 12914 kB data, 7 objects

           13233 MB used, 61767 MB / 79110 MB avail

                 252 active+clean

recovery io 1756 kB/s, 0 objects/s

 

Next, create a new RBD device to serve as the Cinder backend; the original plan was about 10 GB, but 18 GB is used in the next section.

 

 

 

2.   Creating the RBD pool and RBD block device

# ceph osd pool create cinder 100

# ceph osd lspools

0 rbd,1 rbd_pool,2 images,3 cinder,
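The 100 in the create command is the placement-group count for the pool; it can be checked afterwards, along with the replication size, using standard commands (shown only as a sanity check):

# ceph osd pool get cinder pg_num
# ceph osd pool get cinder size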

# rbd create --size 18432 cinder/myblk    (being a bit conservative here: 18 GB, i.e. 18432 MB)

# rbd ls cinder

myblk

# rbd map myblk -p cinder

# lsblk

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT

vda   253:0    0    20G 0 disk

├─vda1 253:1    0  19.5G 0 part /

├─vda2 253:2    0     1K 0 part

└─vda5 253:5    0   510M 0 part [SWAP]

rbd1  251:0    0    4G  0 disk

rbd2   250:0    0   18G  0 disk
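The mapping can also be confirmed from the rbd side (standard commands):

# rbd showmapped
# rbd info cinder/myblk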

 

3.   Configuring RBD as the Cinder backend

(1) Copy /etc/ceph to the OpenStack environment.

Then update the hosts file on both the OpenStack and Ceph nodes:

192.168.11.8 ubuntu

192.168.11.4 lxpnode1

192.168.11.5 lxpnode2

192.168.11.6 lxpnode3
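With the config copied and the hosts entries in place, the OpenStack node should be able to reach the cluster; a quick check (assuming the ceph CLI client is installed there):

# ceph -s
# rados lspools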


(2) Edit cinder.conf:

[DEFAULT]

enabled_backends = ceph

 

[ceph]

volume_driver = cinder.volume.drivers.rbd.RBDDriver

rbd_pool = cinder              # the RBD pool created above

rbd_ceph_conf = /etc/ceph/ceph.conf

rbd_flatten_volume_from_snapshot = false

rbd_max_clone_depth = 5

rbd_store_chunk_size = 4

rados_connect_timeout = -1

glance_api_version = 2
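If cephx authentication is enabled on the cluster (not covered above), the [ceph] section would additionally need a client user and a libvirt secret, roughly like this (the user name and UUID are placeholders):

rbd_user = cinder
rbd_secret_uuid = <uuid-of-the-libvirt-secret>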

(3) Restart the cinder-volume service.
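On the Ubuntu setup used here, the restart plus a quick end-to-end check could look like this (a sketch; the exact service command depends on the init system, and the test volume name and size are arbitrary):

# service cinder-volume restart
# cinder service-list
# cinder create --display-name ceph-test 1
# rbd ls cinder

If the backend works, the new volume shows up in the cinder pool as a volume-<id> image.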

 
