OpenStack storage notes: using a Ceph cluster as the unified storage backend



Prerequisites

A working Ceph cluster deployed with ceph-deploy, with dataprovider as the admin node, three MONs, and four OSDs.
An OpenStack cluster in which Cinder and Glance are both installed on a node named controllernode; computenode and networknode are the compute and network nodes, respectively.


Create the pools

On dataprovider, create the following pools:

ceph osd pool create volumes 32
ceph osd pool create images 32
ceph osd pool create backups 32
ceph osd pool create vms 32
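The pg_num of 32 used above is a small fixed choice. A common rule of thumb sizes placement groups per pool as (OSD count x 100) / replica count, rounded up to a power of two. A small helper sketch (the rule and the numbers are illustrative, not from this article; confirm against the Ceph placement-group documentation for your release):

```shell
# Rule-of-thumb PG sizing (illustrative sketch).
pg_count() {
  local osds=$1 replicas=$2
  local target=$(( osds * 100 / replicas ))
  local pg=1
  # round up to the next power of two
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

pg_count 4 3   # 4 OSDs, 3 replicas -> 256
```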


Configure the Ceph clients

On the Glance node, run:

sudo yum install python-ceph

On the nova-compute, cinder-backup, and cinder-volume nodes, run:

sudo yum install ceph


Create a leadorceph user on controllernode and computenode:
sudo useradd -d /home/leadorceph -m leadorceph
sudo passwd leadorceph
echo "leadorceph  ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/leadorceph
sudo chmod 0440 /etc/sudoers.d/leadorceph

As the leadorceph user, run sudo visudo and change "Defaults requiretty" to "Defaults:leadorceph !requiretty".

Set up passwordless SSH login from dataprovider:
ssh-copy-id leadorceph@controllernode
ssh-copy-id leadorceph@computenode


Configure authentication

On the dataprovider node, run:
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

ceph auth get-or-create client.glance | ssh controllernode sudo tee /etc/ceph/ceph.client.glance.keyring
ssh controllernode sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh controllernode  sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh controllernode  sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh controllernode  sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh controllernode  sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

ceph auth get-key client.cinder | ssh computenode tee client.cinder.key


On computenode, perform the following steps.

Switch to the leadorceph user:

su leadorceph
uuidgen
# sample output: 78f475b1-846f-47ba-8145-9f305de5c516

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>78f475b1-846f-47ba-8145-9f305de5c516</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 78f475b1-846f-47ba-8145-9f305de5c516 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
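The two steps above can also be scripted so the UUID never has to be hand-edited into the XML. A sketch with a hypothetical make_secret_xml helper (the helper name is not from this article; the virsh calls themselves still require libvirt and root):

```shell
# Hypothetical helper: emit the libvirt secret XML for a given UUID.
make_secret_xml() {
  local uuid=$1
  cat <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${uuid}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
}

make_secret_xml 78f475b1-846f-47ba-8145-9f305de5c516 > secret.xml
# then, as before:
#   sudo virsh secret-define --file secret.xml
#   sudo virsh secret-set-value --secret <uuid> --base64 "$(cat client.cinder.key)"
```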


Configure Glance

On the controller node, switch to the root user.
Edit the [DEFAULT] section of /etc/glance/glance-api.conf:
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
rbd_store_chunk_size = 8
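If you want Cinder to create copy-on-write clones of Glance images (avoiding a full image copy when booting from volume), Glance must also expose image locations. A commonly used addition to the same [DEFAULT] section (note: direct URLs reveal backend details, so restrict who can reach the Glance API accordingly):

```ini
# Addition to /etc/glance/glance-api.conf [DEFAULT] (sketch).
# Lets Cinder clone RBD-backed images copy-on-write.
show_image_direct_url = True
```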

Configure Cinder

Add the following to the [DEFAULT] section of /etc/cinder/cinder.conf:
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2

rbd_user = cinder
rbd_secret_uuid = 78f475b1-846f-47ba-8145-9f305de5c516
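On Havana-era releases these options live in [DEFAULT] as shown; later Cinder releases favor a named backend section selected via enabled_backends. A hedged sketch of the equivalent multi-backend layout (the section and backend names are illustrative, not from this article):

```ini
# Illustrative multi-backend layout for later Cinder releases (sketch).
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 78f475b1-846f-47ba-8145-9f305de5c516
```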


Configure cinder-backup

Also in /etc/cinder/cinder.conf, add:
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true


Configure VM volume-attach permissions

Add the following to /etc/nova/nova.conf:

rbd_user = cinder
rbd_secret_uuid = 78f475b1-846f-47ba-8145-9f305de5c516

Edit the Ceph configuration (/etc/ceph/ceph.conf) on computenode and add:
[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
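For the RBD cache enabled above to take effect, the libvirt disk cache mode should match. On releases that support it, the corresponding nova.conf setting looks like this (a sketch; verify the option against your release's nova configuration reference):

```ini
# Sketch: make libvirt use writeback caching for network (rbd) disks,
# matching the rbd cache settings above.
[libvirt]
disk_cachemodes = "network=writeback"
```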

Changes for the Havana and Icehouse releases

Add the following to /etc/nova/nova.conf on computenode:
libvirt_images_type = rbd
libvirt_images_rbd_pool = vms
libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 78f475b1-846f-47ba-8145-9f305de5c516
libvirt_inject_password = false
libvirt_inject_key = false
libvirt_inject_partition = -2
libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
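In later releases (Juno onward) these libvirt_-prefixed options moved into a [libvirt] section and lost the prefix. A sketch of the equivalent settings (verify option names against your release's nova configuration reference):

```ini
# Juno-and-later equivalent of the libvirt_* options above (sketch).
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 78f475b1-846f-47ba-8145-9f305de5c516
inject_password = false
inject_key = false
inject_partition = -2
```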



Restart the services

sudo service openstack-glance-api restart
sudo service openstack-nova-compute restart
sudo service openstack-cinder-volume restart
sudo service openstack-cinder-backup restart


Verify

After creating a volume and an image, check the RADOS pools from the dataprovider node:
rados -p images ls
rados -p volumes ls












