The Ceph installation process itself is not covered here; see http://www.vpsee.com/2015/07/install-ceph-on-centos-7/ for details.
Official Ceph installation docs: http://docs.ceph.com/ceph-deploy/docs/install.html
Common Ceph commands: http://zhanguo1110.blog.51cto.com/5750817/1543032
Install the Ceph client
# Create a pool (size the PG count for your actual cluster; when you install with ceph-deploy,
# an rbd pool is created for you automatically).
# Here glance, cinder and nova share a single pool; in a real production environment they would
# normally be kept separate.
[root@ceph01 ~(keystone_admin)]# ceph osd pool create rbd 128

# Install the Ceph client packages on the glance-api, nova-compute, cinder-backup and cinder-volume nodes
[root@ceph01 ~(keystone_admin)]# yum install python-rbd ceph
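A quick sanity check after this step (optional; these are standard Ceph CLI commands run from a node that has the admin keyring):

# Confirm the pool exists and the client tools can reach the cluster
[root@ceph01 ~(keystone_admin)]# ceph osd lspools     # should list the rbd pool
[root@ceph01 ~(keystone_admin)]# ceph -s              # overall cluster health
[root@ceph01 ~(keystone_admin)]# rbd -p rbd ls        # empty output is normal for a brand-new pool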
Set up Ceph client authentication
# Create a Ceph auth user; here glance, cinder and nova share a single user.
# The official docs recommend creating separate users for nova, cinder and glance.
[root@ceph01 ~(keystone_admin)]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'

# View the generated keyring for the rbd user
[root@ceph01 ~(keystone_admin)]# ceph auth get-or-create client.rbd
[client.rbd]
        key = AQBKGHBWzJCYORAAABHki+tWoOFgiTZL8FNnaA==

# Perform the next two steps on the glance-api, cinder-volume, cinder-backup and nova-compute nodes.
# Create the keyring file and add the following content:
[root@ceph01 ~(keystone_admin)]# vim /etc/ceph/ceph.client.rbd.keyring
[client.rbd]
        key = AQBKGHBWzJCYORAAABHki+tWoOFgiTZL8FNnaA==

# Because nova, cinder and glance share one user, relax the file permissions to 777
[root@ceph01 ~(keystone_admin)]# ll /etc/ceph/ceph.client.rbd.keyring
-rwxrwxrwx 1 root root 61 Dec 15 21:44 /etc/ceph/ceph.client.rbd.keyring

# Configure the libvirt secret key. The libvirt process needs the cinder keyring (here client.rbd)
# so that it can access the Ceph cluster and attach block devices.
# Use tee to create a temporary key file on each compute node
[root@ceph01 ~(keystone_admin)]# ceph auth get-key client.rbd | ssh {your-compute-node} tee client.rbd.key

# Run the following on every compute node
[root@ceph01 ~(keystone_admin)]# uuidgen    # generate a random uuid
aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5</uuid>
  <usage type='ceph'>
    <name>client.rbd secret</name>
  </usage>
</secret>
EOF

sudo virsh secret-define --file secret.xml
Secret aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5 created

sudo virsh secret-set-value --secret aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5 --base64 $(cat client.rbd.key) && rm client.rbd.key secret.xml

# The uuid does not actually have to be identical on every compute node; keeping it the same
# is purely a matter of platform-wide consistency.
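To verify the credentials and the libvirt secret (a minimal check, assuming the uuid above; the compute node hostname is a placeholder):

# On a compute node: confirm the client.rbd keyring can access the pool
[root@compute01 ~]# rbd --id rbd -p rbd ls

# Confirm libvirt stored the secret and that its value matches the key in /etc/ceph/ceph.client.rbd.keyring
[root@compute01 ~]# virsh secret-list
[root@compute01 ~]# virsh secret-get-value aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5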
OpenStack RBD configuration
# glance rbd configuration
[root@ceph01 ~(keystone_admin)]# vim /etc/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True    # enable copy-on-write cloning of images
[glance_store]
default_store = rbd
stores = rbd
filesystem_store_datadir = /var/lib/glance/images/
rbd_store_pool = rbd
rbd_store_user = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[paste_deploy]
flavor = keystone    # disable glance cache management; if your flavor is keystone+cachemanagement, change it

# cinder rbd configuration
[root@ceph01 ~(keystone_admin)]# vim /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2    # the official docs say this must be set if you configure cinder multi backends
rbd_user = rbd
rbd_secret_uuid = aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5

# nova rbd configuration
[libvirt]
inject_password = False    # file injection is not supported when OpenStack boots an instance from a volume
inject_key = False         # ditto
inject_partition = -2      # ditto
virt_type = kvm
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
images_type = rbd
images_rbd_pool = rbd
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = rbd
rbd_secret_uuid = aa03e7e8-6fcc-443f-94aa-ac169bfd0fd5
disk_cachemodes = "network=writeback"

# On every compute node, edit the Ceph configuration file.
# Enabling the admin socket helps with troubleshooting.
[root@ceph01 ~(keystone_admin)]# vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

[root@ceph01 ~(keystone_admin)]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@ceph01 ~(keystone_admin)]# chown qemu:qemu /var/run/ceph/guests /var/log/qemu/

OpenStack configuration best practices (reposted from http://www.wzxue.com/openstack-ceph-kilo/)

Ceph.conf:
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = {{ rbd_client_log_file }}

Glance:
Disable local cache: s/flavor = keystone+cachemanagement/flavor = keystone/
Expose images URL: show_image_direct_url = True
hw_scsi_model=virtio-scsi    # for discard and perf
hw_disk_bus=scsi

Nova:
hw_disk_discard = unmap               # enable discard support (be careful of perf)
inject_password = false               # disable password injection
inject_key = false                    # disable key injection
inject_partition = -2                 # disable partition injection
disk_cachemodes = "network=writeback" # make QEMU aware so caching works
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"

Cinder:
glance_api_version = 2

# Finally, restart the services
[root@ceph01 ~(keystone_admin)]# service openstack-glance-api restart
[root@ceph01 ~(keystone_admin)]# service openstack-nova-compute restart
[root@ceph01 ~(keystone_admin)]# service openstack-cinder-volume restart
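Once the services are restarted, a simple end-to-end check (a sketch only; the image and volume names and the local image file are placeholders) is to create a Glance image and a Cinder volume and confirm they appear as RBD images in the shared pool. Raw images are preferable with an RBD backend so that copy-on-write cloning can be used.

[root@ceph01 ~(keystone_admin)]# glance image-create --name test-raw --disk-format raw --container-format bare --file ./test.raw
[root@ceph01 ~(keystone_admin)]# cinder create --display-name vol-rbd-test 1

# Glance images show up under their image UUID, Cinder volumes as volume-<uuid>
[root@ceph01 ~(keystone_admin)]# rbd -p rbd ls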
Reference links
http://docs.ceph.com/docs/master/rbd/rbd-openstack/
http://my.oschina.net/JerryBaby/blog/376580?fromerr=wNPJrqPP#OSC_h2_1