OpenStack (Mitaka) hyper-converged infrastructure: integrating Ceph with OpenStack Glance/Nova/Cinder/Cinder-backup

In an OpenStack deployment, Glance, Nova, Cinder and Swift are the services that deal with storage, and each provides it in its own way. Glance stores image resources and is known as the image service; Nova runs virtual machine instances and stores their resources, and is known as the compute service; Cinder provides block storage for virtual machines; Swift provides object storage. The tasks below focus on integrating Ceph with Glance, Nova and Cinder, and on configuring Ceph as the unified storage backend for these OpenStack services.

This experiment uses an All-in-one OpenStack node as the Ceph client (All-in-one simply means a single-node OpenStack lab deployment).

The Ceph cluster deployment will be written up later; this post focuses on the integration itself.

Lab environment:

OpenStack (Mitaka) All-in-one (CentOS 7.2-1511)
Ceph: a three-node distributed storage cluster. Note: this uses RBD block storage, not the file system (i.e. not CephFS).

First, here is what a successful integration looks like:

[root@xiandian ~]# rbd ls images     // list the images pool
0a2ce51d-bcd6-4a5b-8320-ca7db7c5aa9f
[root@xiandian ~]# glance image-list
+--------------------------------------+------+
| ID                                   | Name |
+--------------------------------------+------+
| 0a2ce51d-bcd6-4a5b-8320-ca7db7c5aa9f | 1    |
+--------------------------------------+------+


[root@xiandian ~]# rbd ls vms
6a0a4266-815b-48dc-beb6-5123861a6e2f_disk

[root@xiandian ~]# nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| 6a0a4266-815b-48dc-beb6-5123861a6e2f | a    | ACTIVE | -          | Running     | sharednet1=192.168.200.6 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+


[root@xiandian ~]# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 295c9b3f-755f-44a8-9481-db6952c22752 | available | test |  20  |      -      |  false   |             |
| 2a064ae1-abfc-4ecc-8587-088f2f4caa89 | available | test |  1   |      -      |  false   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
[root@xiandian ~]# rbd ls volumes
volume-295c9b3f-755f-44a8-9481-db6952c22752
volume-2a064ae1-abfc-4ecc-8587-088f2f4caa89

[root@xiandian ~]# cinder backup-list
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
|                  ID                  |              Volume ID               |   Status  | Name | Size | Object Count |   Container   |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
| 7144290b-7cef-437f-9dac-0e2c9f2fd9fa | 2a064ae1-abfc-4ecc-8587-088f2f4caa89 | available |  -   |  1   |      0       |    backups    |
+--------------------------------------+--------------------------------------+-----------+------+------+--------------+---------------+
[root@xiandian ~]# rbd ls backups
volume-2a064ae1-abfc-4ecc-8587-088f2f4caa89.backup.7144290b-7cef-437f-9dac-0e2c9f2fd9fa

1. Install the Ceph client on the All-in-one OpenStack node, using the ceph-deploy tool;

Synchronize the clock of the OpenStack node with the Ceph cluster; an NTP server is used here:
# ntpdate ceph-1




On the Ceph server (the ceph-deploy admin node), run the following, replacing <openstack-node> with the hostname or IP of the OpenStack node:

# ceph-deploy install <openstack-node>

Wait for the installation to complete...
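One thing to note here: the Glance/Cinder/Nova settings later in this post all reference /etc/ceph/ceph.conf on the OpenStack node, so the cluster's ceph.conf must be present there as well. A minimal sketch, assuming the ceph-deploy admin node can reach the OpenStack node under the placeholder name <openstack-node>:

// push the cluster configuration file to the OpenStack node
[root@ceph-server1 ~]# ceph-deploy config push <openstack-node>

// then, on the OpenStack node, confirm the client packages are installed
[root@opensatck ~]# ceph --version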


2. Create the Ceph storage pools

A new pool is created with:

ceph osd pool create <pool-name> <pg_num>
  • fewer than 5 OSDs: set pg_num to 128
  • 5 to 10 OSDs: set pg_num to 512
  • 10 to 50 OSDs: set pg_num to 1024
  • with more than 50 OSDs you need to understand the trade-offs and calculate pg_num yourself (a worked example follows this list)
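As a rough illustration of that calculation (the common rule of thumb used by the Ceph PG calculator, not something stated in the original post): total pg_num ≈ (number of OSDs × 100) / replica count, rounded up to the nearest power of two. For example, 60 OSDs with 3-way replication gives 60 × 100 / 3 = 2000, which rounds up to pg_num = 2048.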
Run on the Ceph server:

[root@ceph-server1 ~]# ceph osd pool create volumes 128
// create the volumes pool, used by the Cinder service

[root@ceph-server1 ~]# ceph osd pool create images 128
// create the images pool, used by the Glance service

[root@ceph-server1 ~]# ceph osd pool create vms 128
// create the vms pool, used by the Nova service

[root@ceph-server1 ~]# ceph osd pool create backups 128
// create the backups pool, used by the cinder-backup service. A backup pool inside the same Ceph cluster is of limited value, though: a proper backup should span clusters, server rooms or regions to achieve disaster recovery.
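Before moving on, it is worth listing the pools to confirm that all four were created (a quick sanity check; pool IDs will differ per cluster):

[root@ceph-server1 ~]# ceph osd lspools
// the output should include volumes, images, vms and backups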

3. Create the users;

Create the cinder, glance and cinder-backup users in Ceph, each with restricted capabilities:

[root@ceph-server1 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
// the cinder and nova components share this single user.

[root@ceph-server1 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

[root@ceph-server1 ~]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
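If you want to verify the capabilities actually granted to each user, Ceph can print them back; glance is shown here as an example (the same works for client.cinder and client.cinder-backup):

[root@ceph-server1 ~]# ceph auth get client.glance
// prints the generated key together with the mon/osd caps defined above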

 

4. Copy the generated keyring files to the appropriate node and set the correct ownership. Note: replace <openstack-node> with the IP or hostname of the OpenStack node.

// distribute the glance keyring and fix its ownership:

[root@ceph-server1 ~]# ceph auth get-or-create client.glance | ssh <openstack-node> tee /etc/ceph/ceph.client.glance.keyring
# ssh <openstack-node> chown glance:glance /etc/ceph/ceph.client.glance.keyring



// distribute the cinder keyring and fix its ownership:

[root@ceph-server1 ~]# ceph auth get-or-create client.cinder | ssh <openstack-node> tee /etc/ceph/ceph.client.cinder.keyring
# ssh <openstack-node> chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring



// distribute the cinder-backup keyring and fix its ownership:

[root@ceph-server1 ~]# ceph auth get-or-create client.cinder-backup | ssh <openstack-node> tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@ceph-server1 ~]# ssh <openstack-node> chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

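A quick way to confirm the keyrings landed on the OpenStack node with the right ownership (again, <openstack-node> is a placeholder for your node's hostname or IP):

[root@ceph-server1 ~]# ssh <openstack-node> ls -l /etc/ceph/
// expect ceph.client.glance.keyring, ceph.client.cinder.keyring and ceph.client.cinder-backup.keyring, owned by glance and cinder respectively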

5. Add the secret key to libvirt
 

Run on the Ceph server node:
// fetch the cinder key and save it to a temporary file on the OpenStack node
[root@ceph-server1 ~]# ceph auth get-key client.cinder | ssh <openstack-node> tee /root/client.cinder.key



Run on the OpenStack node:

// generate a UUID
[root@opensatck ~]# uuidgen
457eb676-33da-42ec-9a8c-9293d545c337



// create secret.xml; be sure to substitute the UUID generated above
[root@opensatck ~]# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF


[root@opensatck ~]# virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created



// set the secret value in libvirt, then delete the temporary key file
[root@opensatck ~]# virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat /root/client.cinder.key) && rm client.cinder.key 


// list the defined secrets
[root@opensatck ~]# virsh secret-list

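To confirm that libvirt really stored the key, the value can be read back; it should match the contents of the client.cinder.key file fetched earlier (the UUID is the one generated above):

[root@opensatck ~]# virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337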

6. Edit the configuration files of the individual components:

glance:

[root@opensatck ~]# vi /etc/glance/glance-api.conf 
[DEFAULT]
rpc_backend = rabbit
show_image_direct_url = True
[glance_store]
#stores = file,http
#file =
#filesystem_store_datadir = /var/lib/glance/images/
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8

 

cinder: 

[root@opensatck ~]# vi /etc/cinder/cinder.conf
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337


[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 127.0.0.1
enabled_backends = ceph
glance_api_servers = http://xiandian:9292


[lvm]
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_group = cinder-volumes
#iscsi_protocol = iscsi
#iscsi_helper = lioadm

cinder_backup: 

[root@opensatck ~]# vi /etc/cinder/cinder.conf  
[DEFAULT]   // add the following settings
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

nova:

[root@opensatck ~]# vi /etc/nova/nova.conf

[libvirt]
virt_type = qemu
inject_key = false
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes="network=writeback"

Restart the affected services so that the new configuration takes effect:

systemctl restart openstack-glance-api.service openstack-glance-registry.service

systemctl restart  libvirtd.service openstack-nova-compute.service openstack-nova-api.service  openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl restart  openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service target.service openstack-cinder-backup.service
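Once the services are back up, an end-to-end check along these lines should reproduce the results shown at the top of this post. The image file name and volume size below are only examples, and the rbd commands are run from a node that holds the admin keyring (the Ceph server here):

// upload an image and confirm it is stored as an RBD object in the images pool
[root@opensatck ~]# glance image-create --name cirros --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img
[root@ceph-server1 ~]# rbd ls images

// create a volume and confirm it appears in the volumes pool
[root@opensatck ~]# cinder create --name test 1
[root@ceph-server1 ~]# rbd ls volumes

// back the volume up and confirm the backup object lands in the backups pool
[root@opensatck ~]# cinder backup-create test
[root@ceph-server1 ~]# rbd ls backups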
