Official Ceph installation documentation
Previous posts in this series:
OpenStack Train installation
② Ceph version selection and installation
③ Ceph RBD block devices: introduction and creation
To use Ceph block devices with OpenStack, you must have access to a running Ceph storage cluster.
In this setup, the OpenStack environment and the Ceph cluster sit on the same network segment.
Using Ceph as the backend storage for OpenStack brings a number of benefits.
In production it is common to back Nova, Cinder, and Glance with Ceph RBD; Swift and Manila can likewise be backed by Ceph RGW and CephFS. As a unified storage solution, Ceph noticeably reduces the complexity and operating cost of an OpenStack cloud.
This article integrates three key OpenStack services with Ceph: Cinder (block storage), Glance (images), and Nova (virtual machine disks).
Glance is the OpenStack image service. It supports multiple backends: images can be stored on the local filesystem, an HTTP server, Ceph, GlusterFS, Sheepdog, and other open-source distributed storage systems. This article covers how to combine Glance with Ceph.
At the moment Glance uses the local filesystem backend, storing images under the default path /var/lib/glance/images. Once the backend is switched from the local filesystem to Ceph, the images already on disk become unusable, so it is best to delete the current images and re-upload them after the Ceph integration is in place.
Nova manages the life cycle of virtual machines: create, delete, rebuild, power on/off, reboot, snapshot, and so on. As the core of OpenStack it carries the compute responsibilities of the IaaS layer, so its storage matters a great deal. By default Nova keeps instance data under /var/lib/nova/instances/%UUID on local storage. This is simple, easy to set up, fast, and keeps the failure domain small. The drawbacks are just as clear: if a compute node fails, its VMs stay down for a long time and cannot be recovered quickly, and advanced features such as live migration and nova evacuate become unavailable, which is a real limitation for the longer-term cloud platform. Integrating with Ceph mainly means storing the instances' system disks in the Ceph cluster. Strictly speaking it is less an integration with Nova than with QEMU-KVM/libvirt, since librbd has long been natively supported there.
Cinder provides the volume service for OpenStack and supports a very wide range of backend storage types. With Ceph as the backend, every Volume that Cinder creates is in essence a Ceph RBD block device; when a Volume is attached to a VM, libvirt accesses the disk over the rbd protocol. Besides cinder-volume, the Cinder backup service can also use Ceph, uploading backup images to the cluster as objects or block devices.
Using the Ceph RBD interface goes through libvirt, so libvirt and QEMU must be installed on the client machines. Within OpenStack, storage is consumed in three places: the Glance image store (by default under /var/lib/glance/images), the Nova instance store (by default under /var/lib/nova/instances), and Cinder volumes. Unless stated otherwise, the following commands are run on the cephnode01 admin node.
Ceph PG count: configuration and background
Before creating the pools, raise the maximum number of PGs allowed per OSD.
https://ceph.com/releases/v12-2-1-luminous-released/
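As a rough sanity check, a commonly cited rule of thumb is total PGs ≈ (number of OSDs × 100) / replica count, rounded to a power of two and split across the pools. The sketch below assumes a small 3-OSD test cluster with 3 replicas, which is why the four 64-PG pools created further down can exceed the default per-OSD PG limit (around 200–250 depending on the release) and why mon_max_pg_per_osd is raised here:
# Rule-of-thumb PG sizing sketch (assumed values: 3 OSDs, size=3, 4 pools of similar weight)
OSDS=3; REPLICAS=3; POOLS=4
TOTAL_PGS=$(( OSDS * 100 / REPLICAS ))   # ~100 PGs for the whole cluster
PER_POOL=$(( TOTAL_PGS / POOLS ))        # ~25 per pool -> round up to a power of two
echo "suggested pg_num per pool: ${PER_POOL} (round up to a power of two)"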
[root@cephnode01 my-cluster]# vim ceph.conf
[global]
mon_max_pg_per_osd = 300
#Push the modified configuration to the other cluster nodes
ceph-deploy --overwrite-conf config push cephnode01 cephnode02 cephnode03
#Restart the ceph-mgr service so the changed parameter takes effect
systemctl restart ceph-mgr.target
1. The cluster already has a pool named rbd (created by default):
[root@cephnode01 my-cluster]# ceph osd lspools
rbd
-----------------------------------
2. Create dedicated RBD pools for Glance, Nova, and Cinder, and initialize them
# glance-api
ceph osd pool create images 64 64
rbd pool init images
# cinder-volume
ceph osd pool create volumes 64 64
rbd pool init volumes
# cinder-backup [optional]
ceph osd pool create backups 64 64
rbd pool init backups
# nova-compute
ceph osd pool create vms 64 64
rbd pool init vms
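rbd pool init also tags each pool with the rbd application, so clients know what the pool is used for. An optional check (output shown approximately):
# Optional: confirm each pool carries the rbd application tag
ceph osd pool application get images
# expected output, roughly: { "rbd": {} }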
-----------------------------------
3. Check a pool's pg_num and pgp_num
ceph osd pool get [pool_name] pg_num
pg_num: 64
ceph osd pool get [pool_name] pgp_num
pgp_num: 64
-----------------------------------
4. List the pools in the cluster (ignoring the pools created earlier)
[root@cephnode01 my-cluster]# ceph osd lspools
...
12 volumes
13 images
14 backups
15 vms
[root@cephnode01 my-cluster]# ceph osd pool stats
...
pool volumes id 12
nothing is going on
pool images id 13
nothing is going on
pool backups id 14
nothing is going on
pool vms id 15
nothing is going on
#Client nodes running glance-api, cinder-volume, nova-compute, and cinder-backup:
cat >>/etc/hosts << EOF
192.168.0.10 controller
192.168.0.20 computel01
192.168.0.40 cinder01
EOF
-----------------------------------
#Push SSH keys so the client nodes can be reached without a password
for ip in 10 20 40 ;do sshpass -p123456 ssh-copy-id -o StrictHostKeyChecking=no 192.168.0.$ip ;done
-----------------------------------
#Distribute the Ceph yum repository to each OpenStack client node
for ip in 10 20 40 ;do sshpass -p123456 scp -rp /etc/yum.repos.d/ 192.168.0.$ip:/etc/yum.repos.d/ ;done
#then run on each client node:
yum makecache
#Glance, Nova, and Cinder act as Ceph clients and therefore need the Ceph configuration file
[root@cephnode01 my-cluster]# ceph-deploy install controller computel01 cinder01
[root@cephnode01 my-cluster]# ceph-deploy --overwrite-conf admin controller computel01 cinder01
[root@cephnode01 ~]# scp /etc/ceph/ceph.conf root@controller:/etc/ceph/ceph.conf
#Make sure python-rbd is installed for Glance on the controller node
[root@controller ~]# rpm -qa python-rbd
python-rbd-14.2.9-0.el7.x86_64
#Make sure ceph-common is installed for Nova on computel01 and for Cinder on cinder01
[root@computel01 ~]# rpm -qa | grep ceph-common
ceph-common-14.2.9-0.el7.x86_64
[root@cinder01 ~]# rpm -qa | grep ceph-common
ceph-common-14.2.9-0.el7.x86_64
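Because ceph-deploy admin pushed both ceph.conf and the admin keyring to these nodes, each client should already be able to reach the cluster. A minimal sanity check, assuming the admin keyring is readable by root:
# Run on controller / computel01 / cinder01
ceph -s    # a health summary (HEALTH_OK / HEALTH_WARN) confirms ceph.conf and the keyring are usable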
Create cephx users for Glance, Cinder, and cinder-backup
#[glance-api]
[root@cephnode01 ~]# ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images'
[client.glance]
key = AQAlXbZe0tNeKhAAjR9ltLbEVwuBucdfq6y0qg==
#[cinder-volume]
[root@cephnode01 ~]# ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
[client.cinder]
key = AQDvfbZeBt/hMBAAJzGnZFq6/0gqn3Gb9Y8Rlw==
#[cinder-backup]
[root@cephnode01 ~]# ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'
[client.cinder-backup]
key = AQAWfrZekCjyLxAAnKfEZsGa6rsfXHE7dv3q8Q==
View the auth list
[root@cephnode01 ~]# ceph auth list
installed auth entries:
client.glance            #credentials Glance uses to access Ceph
key: AQAlXbZe0tNeKhAAjR9ltLbEVwuBucdfq6y0qg==
caps: [mon] profile rbd
caps: [osd] profile rbd pool=images
client.cinder            #credentials Cinder uses to access Ceph
key: AQDvfbZeBt/hMBAAJzGnZFq6/0gqn3Gb9Y8Rlw==
caps: [mon] profile rbd
caps: [osd] profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images
client.cinder-backup     #credentials cinder-backup uses to access Ceph
key: AQAWfrZekCjyLxAAnKfEZsGa6rsfXHE7dv3q8Q==
caps: [mon] profile rbd
caps: [osd] profile rbd pool=backups
Create keyring files for the glance, cinder, and cinder-backup users
# ceph auth get-or-create client.glance | ssh root@controller sudo tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQAlXbZe0tNeKhAAjR9ltLbEVwuBucdfq6y0qg==
# ceph auth get-or-create client.cinder | ssh root@cinder01 sudo tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key: AQDvfbZeBt/hMBAAJzGnZFq6/0gqn3Gb9Y8Rlw==
# ceph auth get-or-create client.cinder-backup | ssh root@cinder01 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
[client.cinder-backup]
key: AQAWfrZekCjyLxAAnKfEZsGa6rsfXHE7dv3q8Q==
Give the OpenStack service users ownership of their keyrings so each service can access the Ceph cluster
ssh root@controller sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@cinder01 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh root@cinder01 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
Create key files for the Nova nodes
#controller node
ceph auth get-key client.cinder | ssh root@controller tee /etc/ceph/client.cinder.key
#computel01 node
ceph auth get-key client.cinder | ssh root@computel01 tee /etc/ceph/client.cinder.key
Create a libvirt secret for the client.cinder user on the Nova nodes
The libvirt process on a compute node needs to access the Ceph cluster whenever it attaches or detaches a Volume provided by Cinder, so the client.cinder key has to be registered as a libvirt secret. Note that nova-compute runs on both the controller and computel01 nodes, so the secret must be added to libvirt on both of them, and it must be the same secret that cinder-volume uses.
#Generate a random UUID to uniquely identify the libvirt secret
#Generate it only once; all cinder-volume and nova-compute nodes use the same UUID.
[root@computel01 ~]# uuidgen
1c305557-2373-4e4b-a232-0e2a540cb5bb
-----------------------------------
#Run the following on both Nova compute nodes (controller and computel01)
#If the virsh command is missing, install libvirt first
yum -y install libvirt
systemctl restart libvirtd
systemctl enable libvirtd
-----------------------------------
#Create the libvirt secret definition file
cat > /etc/ceph/secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>1c305557-2373-4e4b-a232-0e2a540cb5bb</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
-----------------------------------
#Define the libvirt secret
[root@computel01 ~]# sudo virsh secret-define --file /etc/ceph/secret.xml
Secret 1c305557-2373-4e4b-a232-0e2a540cb5bb created
-----------------------------------
#Set the secret's value to the client.cinder key; with this key libvirt can access the Ceph cluster as the cinder user
[root@computel01 ~]# sudo virsh secret-set-value --secret 1c305557-2373-4e4b-a232-0e2a540cb5bb --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
-----------------------------------
#List the secrets registered on each Nova node
[root@computel01 ~]# sudo virsh secret-list
UUID Usage
--------------------------------------------------------------------------------
1c305557-2373-4e4b-a232-0e2a540cb5bb ceph client.cinder secret
[root@controller ~]# sudo virsh secret-list
UUID Usage
--------------------------------------------------------------------------------
1c305557-2373-4e4b-a232-0e2a540cb5bb ceph client.cinder secret
Glance provides image registration and metadata services for OpenStack and supports several storage backends. Once integrated with Ceph, every image uploaded through Glance is stored as a block device in the Ceph cluster. Newer Glance releases also support enabled_backends, allowing several storage providers to be configured at the same time.
Copy-on-write
With copy-on-write, the kernel creates only the virtual address space for a newly forked child process, copied from the parent's, without allocating physical memory for those segments; parent and child share the parent's physical pages. Only when either process modifies a segment is a private physical copy allocated for the child. Copy-on-write greatly reduces wasted resources, and Ceph RBD applies the same idea to image clones.
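RBD exposes this through snapshots, protection, and clones. The sketch below uses hypothetical image names (base-image and clone-vol are my examples, written into the images/volumes pools created earlier) to show the clone chain that Glance and Cinder rely on:
# Hypothetical RBD copy-on-write demo; image names are examples only
rbd create images/base-image --size 1024
rbd snap create images/base-image@snap
rbd snap protect images/base-image@snap              # a snapshot must be protected before cloning
rbd clone images/base-image@snap volumes/clone-vol   # the clone shares unmodified data with its parent
rbd info volumes/clone-vol | grep parent             # parent: images/base-image@snap
# clean up the demo objects afterwards:
rbd rm volumes/clone-vol && rbd snap unprotect images/base-image@snap && rbd snap rm images/base-image@snap && rbd rm images/base-image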
#Back up the Glance configuration before editing, so it can be restored if needed
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak2
Add the following to the [glance_store] section to connect to Ceph.
To enable copy-on-write cloning of images, also add the setting below under the [DEFAULT] section.
[root@controller ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
# Enables copy-on-write cloning of images (exposes the image location)
show_image_direct_url = True
[glance_store]
## Local file backend #default settings from installation, now commented out
# stores = file,http
# default_store = file
# filesystem_store_datadir = /var/lib/glance/images/
## Ceph RBD
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
[paste_deploy]
flavor = keystone
[root@controller ~]# systemctl restart openstack-glance-api.service
lsof -i:9292
After switching to Ceph, Glance images are normally created in RAW format rather than QCOW2; otherwise every VM creation has to make a full copy of the image instead of taking advantage of Ceph RBD's copy-on-write cloning.
QEMU and block devices
#Inspect the block device image with qemu-img
[root@controller tools]# qemu-img info cirros-0.5.1-x86_64-disk.img
image: cirros-0.5.1-x86_64-disk.img
file format: qcow2
virtual size: 112M (117440512 bytes)
disk size: 16M
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
-----------------------------------
#Convert the image from qcow2 to raw
[root@controller tools]# qemu-img convert -f qcow2 -O raw cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw
[root@controller tools]# ls
cirros-0.5.1-x86_64-disk.img
cirros-0.5.1-x86_64-disk.raw
[root@controller tools]# qemu-img info cirros-0.5.1-x86_64-disk.raw
image: cirros-0.5.1-x86_64-disk.raw
file format: raw
virtual size: 112M (117440512 bytes)
disk size: 17M
-----------------------------------
#Upload the image, then check how Glance and Ceph interact
[root@controller tools]# openstack image create --container-format bare --disk-format raw --file cirros-0.5.1-x86_64-disk.raw --unprotected --public cirros_raw
+------------------+------------------------------------------------
| Field | Value
+------------------+------------------------------------------------
| checksum | 01e7d1515ee776be3228673441d449e6
| container_format | bare
| created_at | 2020-05-09T08:44:41Z
| disk_format | raw
| file | /v2/images/fd44ac54-1e77-4612-86c2-362c900a715a/file
| id | fd44ac54-1e77-4612-86c2-362c900a715a
| min_disk | 0
| min_ram | 0
| name | cirros_raw
| owner | 5776e47671b1429e957ff78e667397c4
| properties | direct_url='rbd://a4c42290-00ac-4647...
| protected | False
| schema | /v2/schemas/image
| size | 117440512
| status | active
| tags |
| updated_at | 2020-05-09T08:46:06Z
| virtual_size | None
| visibility | public
+------------------+--------------------------------------------
[root@cephnode01 my-cluster]# rbd ls images
fd44ac54-1e77-4612-86c2-362c900a715a
-----------------------------------
[root@cephnode01 my-cluster]# rbd info images/fd44ac54-1e77-4612-86c2-362c900a715a
rbd image 'fd44ac54-1e77-4612-86c2-362c900a715a':
size 112 MiB in 14 objects
order 23 (8 MiB objects)
snapshot_count: 1
id: 4f4363506f69c
block_name_prefix: rbd_data.4f4363506f69c
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Sat May 9 16:44:48 2020
access_timestamp: Sat May 9 16:44:48 2020
modify_timestamp: Sat May 9 16:45:49 2020
-----------------------------------
[root@cephnode01 my-cluster]# rbd snap ls images/fd44ac54-1e77-4612-86c2-362c900a715a
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 snap 112 MiB yes Sat May 9 16:46:03 2020
-----------------------------------
[root@cephnode01 my-cluster]# rbd info images/fd44ac54-1e77-4612-86c2-362c900a715a@snap
-----------------------------------
[root@cephnode01 my-cluster]# rados ls -p images
rbd_id.fd44ac54-1e77-4612-86c2-362c900a715a   #the Glance data is now stored in the Ceph cluster
rbd_data.4f4363506f69c.0000000000000002
rbd_data.4f4363506f69c.0000000000000007
rbd_data.4f4363506f69c.0000000000000001
rbd_data.4f4363506f69c.000000000000000c
rbd_data.4f4363506f69c.0000000000000006
rbd_header.4f4363506f69c
rbd_directory
rbd_data.4f4363506f69c.0000000000000004
rbd_info
rbd_data.4f4363506f69c.000000000000000d
rbd_data.4f4363506f69c.000000000000000b
rbd_data.4f4363506f69c.0000000000000000
rbd_object_map.4f4363506f69c.0000000000000004
rbd_object_map.4f4363506f69c
rbd_data.4f4363506f69c.0000000000000005
rbd_data.4f4363506f69c.0000000000000009
rbd_data.4f4363506f69c.0000000000000008
rbd_data.4f4363506f69c.000000000000000a
rbd_data.4f4363506f69c.0000000000000003
Creating a raw Glance image effectively runs the following steps in Ceph:
rbd -p ${GLANCE_POOL} create --size ${SIZE} ${IMAGE_ID}
rbd -p ${GLANCE_POOL} snap create ${IMAGE_ID}@snap
rbd -p ${GLANCE_POOL} snap protect ${IMAGE_ID}@snap
Deleting a raw Glance image effectively runs the following steps in Ceph:
rbd -p ${GLANCE_POOL} snap unprotect ${IMAGE_ID}@snap
rbd -p ${GLANCE_POOL} snap rm ${IMAGE_ID}@snap
rbd -p ${GLANCE_POOL} rm ${IMAGE_ID}
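If the image snapshot still has copy-on-write children (volumes or VM disks cloned from it), the unprotect step fails and the image cannot be deleted. A hedged extra check, not part of the original workflow:
# List clone children that still depend on the image snapshot
rbd children ${GLANCE_POOL}/${IMAGE_ID}@snap
# any entries listed must be flattened or removed before the image can be deleted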
Summary
Storing Glance images in Ceph is an excellent solution: it protects the image data, and because Glance and Nova share the same Ceph cluster, VMs can be created from images through copy-on-write clones, bringing VM creation down to seconds.
#Back up the Cinder configuration file so it can be restored if needed
[root@cinder01 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak2
#Edit /etc/cinder/cinder.conf
[root@cinder01 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2
glance_api_servers = http://controller:9292
#[lvm]   #the default LVM backend, commented out
#volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
#volume_backend_name = lvm
#volume_group = cinder-volumes
#iscsi_protocol = iscsi
#iscsi_helper = lioadm
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
#cephx user used for authentication
rbd_user = cinder
#libvirt secret UUID the cinder user uses to access the Ceph cluster
rbd_secret_uuid = 1c305557-2373-4e4b-a232-0e2a540cb5bb
[root@cinder01 ~]# systemctl restart openstack-cinder-volume
#Check from the OpenStack controller node
[root@controller ~]# openstack volume service list
The Cinder backup service requires the object storage service to be installed; object storage is not configured here, so this step can be skipped.
https://docs.openstack.org/cinder/train/install/cinder-backup-install-rdo.html
[root@cinder01 ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
...
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
#Start the cinder-backup service
systemctl enable openstack-cinder-backup.service
systemctl restart openstack-cinder-backup.service
#Create an RBD volume type (if the default LVM backend is still enabled, a local LVM volume type can be created as well)
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack volume type create --public --property volume_backend_name="ceph" ceph_rbd
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | None |
| id | 6242e7fd-9f5e-4d30-9635-b5af59db6a8c |
| is_public | True |
| name | ceph_rbd |
| properties | volume_backend_name='ceph' |
+-------------+--------------------------------------+
#List the volume types
[root@controller ~]# openstack volume type list
+--------------------------------------+----------------+-----------+
| ID | Name | Is Public |
+--------------------------------------+----------------+-----------+
| 6242e7fd-9f5e-4d30-9635-b5af59db6a8c | ceph_rbd | True |
| a8faf593-aadd-4af7-b5fd-5f405641710e | VolumeType_web | True |
| 11be6491-4272-4416-92f0-f4286735901e | __DEFAULT__ | True |
+--------------------------------------+----------------+-----------+
#Create a 1 GB volume as the demo user
[root@controller ~]# source demo-openrc
[root@controller ~]# openstack volume create --type ceph_rbd --size 1 ceph_rbd_volume1
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2020-05-11T07:52:49.000000 |
| description | None |
| encrypted | False |
| id | 7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5 |
| multiattach | False |
| name | ceph_rbd_volume1 |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | ceph_rbd |
| updated_at | None |
| user_id | b9649ec199ce402aabc4bbfd4ca00144 |
+---------------------+--------------------------------------+
#List the volumes (volume1 is the pre-existing LVM-backed volume in this OpenStack environment)
[root@controller ~]# openstack volume list
+--------------------------------------+------------------+-----------+------+------------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+------------------+-----------+------+------------------------------------------+
| 7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5 | ceph_rbd_volume1 | available | 1 | |
| 75011e60-33fc-4061-98dc-7028e477efc9 | volume1 | in-use | 1 | Attached to selfservice-vm1 on /dev/vdb |
+--------------------------------------+------------------+-----------+------+------------------------------------------+
#View the objects in the volumes pool
[root@cephnode01 my-cluster]# rbd ls volumes
volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5
[root@cephnode01 my-cluster]# rbd info volumes/volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5
rbd image 'volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 828ff738f5ca2
block_name_prefix: rbd_data.828ff738f5ca2
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon May 11 15:52:53 2020
access_timestamp: Mon May 11 15:52:53 2020
modify_timestamp: Mon May 11 15:52:53 2020
#Inspect the volumes pool
[root@cephnode01 my-cluster]# rados ls -p volumes
rbd_id.volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5
rbd_directory
rbd_info
rbd_object_map.828ff738f5ca2
rbd_header.828ff738f5ca2
Creating an empty Volume is equivalent to running:
rbd -p ${CINDER_POOL} create --new-format --size ${SIZE} volume-${VOLUME_ID}
Creating a Volume from an image uses Ceph RBD's copy-on-write clone feature, which is enabled by setting show_image_direct_url = True in the [DEFAULT] section of glance-api.conf. That option persists the image location, which is what allows the Glance RBD driver to clone from it; the resulting RBD image is then resized to the requested Volume size.
[root@controller ~]# openstack image list
fd44ac54-1e77-4612-86c2-362c900a715a : cirros_raw
[root@controller ~]# openstack volume create --image fd44ac54-1e77-4612-86c2-362c900a715a --type ceph_rbd --size 1 cirros_raw_image
[root@controller ~]# openstack volume list
+--------------------------------------+------------------+-----------+------+------------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+------------------+-----------+------+------------------------------------------+
| 0290e529-943f-49ef-ab1e-d016fa981c68 | cirros_raw_image | available | 1 | |
| 7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5 | ceph_rbd_volume1 | available | 1 | |
| 75011e60-33fc-4061-98dc-7028e477efc9 | volume1 | in-use | 1 | Attached to selfservice-vm1 on /dev/vdb |
+--------------------------------------+------------------+-----------+------+------------------------------------------+
#View the objects in the volumes pool
[root@cephnode01 my-cluster]# rbd ls volumes
volume-0290e529-943f-49ef-ab1e-d016fa981c68 # cirros_raw_image
volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5 # ceph_rbd_volume1
[root@cephnode01 my-cluster]# rbd info volumes/volume-0290e529-943f-49ef-ab1e-d016fa981c68
rbd image 'volume-0290e529-943f-49ef-ab1e-d016fa981c68':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 740ae31c440b3
block_name_prefix: rbd_data.740ae31c440b3
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon May 11 16:11:36 2020
access_timestamp: Mon May 11 16:11:36 2020
modify_timestamp: Mon May 11 16:11:36 2020
parent: images/fd44ac54-1e77-4612-86c2-362c900a715a@snap
overlap: 112 MiB
[root@cephnode01 my-cluster]# rados ls -p volumes
rbd_object_map.740ae31c440b3
rbd_id.volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5
rbd_directory
rbd_children
rbd_info
rbd_object_map.828ff738f5ca2
rbd_id.volume-0290e529-943f-49ef-ab1e-d016fa981c68
rbd_header.740ae31c440b3
rbd_header.828ff738f5ca2
Creating a Volume from an image is equivalent to running:
rbd clone ${GLANCE_POOL}/${IMAGE_ID}@snap ${CINDER_POOL}/volume-${VOLUME_ID}
if [[ -n "${SIZE}" ]]; then
rbd resize --size ${SIZE} ${CINDER_POOL}/volume-${VOLUME_ID}
fi
[root@controller ~]# openstack volume snapshot create --volume cirros_raw_image cirros_raw_image-snap01
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| created_at | 2020-05-11T08:24:34.805571 |
| description | None |
| id | 769938a7-3fa7-4918-9051-a2d71e1cc1d4 |
| name | cirros_raw_image-snap01 |
| properties | |
| size | 1 |
| status | creating |
| updated_at | None |
| volume_id | 0290e529-943f-49ef-ab1e-d016fa981c68 |
+-------------+--------------------------------------+
#List the volume snapshots
[root@controller ~]# openstack volume snapshot list
+--------------------------------------+-------------------------+-------------+-----------+------+
| ID | Name | Description | Status | Size |
+--------------------------------------+-------------------------+-------------+-----------+------+
| 769938a7-3fa7-4918-9051-a2d71e1cc1d4 | cirros_raw_image-snap01 | None | available | 1 |
+--------------------------------------+-------------------------+-------------+-----------+------+
#On the Ceph side, view the snapshot created for the cirros_raw_image volume
[root@cephnode01 my-cluster]# rbd snap ls volumes/volume-0290e529-943f-49ef-ab1e-d016fa981c68
SNAPID NAME SIZE PROTECTED TIMESTAMP
4 snapshot-769938a7-3fa7-4918-9051-a2d71e1cc1d4 1 GiB yes Mon May 11 16:24:36 2020
#Show the snapshot details
[root@cephnode01 my-cluster]# rbd info volumes/volume-0290e529-943f-49ef-ab1e-d016fa981c68@snapshot-769938a7-3fa7-4918-9051-a2d71e1cc1d4
rbd image 'volume-0290e529-943f-49ef-ab1e-d016fa981c68':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 1
id: 740ae31c440b3
block_name_prefix: rbd_data.740ae31c440b3
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Mon May 11 16:11:36 2020
access_timestamp: Mon May 11 16:11:36 2020
modify_timestamp: Mon May 11 16:11:36 2020
protected: True
parent: images/fd44ac54-1e77-4612-86c2-362c900a715a@snap
overlap: 112 MiB
Taking a snapshot of a Volume is equivalent to running:
rbd -p ${CINDER_POOL} snap create volume-${VOLUME_ID}@snapshot-${SNAPSHOT_ID}
rbd -p ${CINDER_POOL} snap protect volume-${VOLUME_ID}@snapshot-${SNAPSHOT_ID}
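Deleting the Cinder snapshot reverses this; a sketch of the equivalent RBD commands:
rbd -p ${CINDER_POOL} snap unprotect volume-${VOLUME_ID}@snapshot-${SNAPSHOT_ID}
rbd -p ${CINDER_POOL} snap rm volume-${VOLUME_ID}@snapshot-${SNAPSHOT_ID}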
If a snapshot is a time machine, a backup is an off-site time machine: it exists for disaster recovery. The Ceph backups pool should therefore live in a different failure domain from the images, volumes, and vms pools.
Backups generally come in full and incremental variants.
#Not tested in this environment
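For reference only, and untested here because the backup service is not configured: once cinder-backup is pointed at the backups pool, creating and listing a backup would look roughly like this:
# Untested sketch: back up an existing volume into the Ceph backups pool
openstack volume backup create --name ceph_rbd_volume1-bak ceph_rbd_volume1
openstack volume backup list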
Nova is the OpenStack compute service. By default Nova stores the virtual disk images of running VMs under /var/lib/nova/instances/%UUID. Ceph is one of the storage backends that can be integrated with Nova directly.
Keeping virtual disk images on local compute-node storage has the drawbacks described earlier: long downtime when a compute node fails and no support for live migration or evacuate. Integrating with Ceph mainly means storing the instances' system disks in the Ceph cluster; strictly speaking it is less an integration with Nova than with QEMU-KVM/libvirt, since librbd is natively supported there.
Adjust the Ceph client configuration on each compute node to enable RBD client caching and the admin socket; this improves performance and makes troubleshooting easier.
#controller, computel01
#Create the required paths and set ownership
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/
#Edit the configuration file
vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
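Once a guest is running, the admin socket can be used to confirm that the cache settings took effect. A hedged check (the socket file name includes the cluster, client id, and pid, so substitute whatever actually appears in the directory):
# Find the guest's admin socket and inspect the effective rbd cache settings
ls /var/run/ceph/guests/
ceph --admin-daemon /var/run/ceph/guests/<actual-socket-name>.asok config show | grep rbd_cache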
#nova.conf on the controller node
[root@controller ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 1c305557-2373-4e4b-a232-0e2a540cb5bb
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
-----------------------------------
#nova.conf on the computel01 node
[root@computel01 ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 1c305557-2373-4e4b-a232-0e2a540cb5bb
disk_cachemodes="network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
#Restart the Nova compute service on the compute node
[root@computel01 ~]# systemctl restart openstack-nova-compute.service
systemctl restart openstack-glance-api
systemctl restart openstack-nova-compute
systemctl restart openstack-cinder-volume
systemctl restart openstack-cinder-backup   #skipped for now; the backup service is not configured
#List the networks
[root@controller ~]# openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-------------+--------------------------------------+
| 926859eb-1e48-44ed-9634-bcabba5eb8b8 | provider | afa6ca79-fe10-4ada-967a-dad846b69712 |
| a7acab4d-3d4b-41f8-8d2c-854fb1ff6d4f | selfservice | bf3c165c-384d-4803-b07e-f5fd0b30415b |
+--------------------------------------+-------------+--------------------------------------+
[root@controller ~]# openstack image list
+--------------------------------------+---------------------------+--------+
| ID | Name | Status |
+--------------------------------------+---------------------------+--------+
| fd44ac54-1e77-4612-86c2-362c900a715a | cirros_raw | active |
+--------------------------------------+---------------------------+--------+
[root@controller ~]# openstack flavor list
+----+---------+------+------+-----------+-------+-----------+
| ID | Name    | RAM  | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+------+------+-----------+-------+-----------+
| 0  | m1.nano | 128  | 1    | 0         | 1     | True      |
| 1  | m2.nano | 1024 | 2    | 0         | 1     | True      |
+----+---------+------+------+-----------+-------+-----------+
#Create an instance; if this fails on the CLI, it can also be created from the dashboard
[root@controller ~]# openstack server create --image fd44ac54-1e77-4612-86c2-362c900a715a --flavor m2.nano --nic net-id=a7acab4d-3d4b-41f8-8d2c-854fb1ff6d4f --security-group default --key-name mykey selfservice-cirros_raw
+-----------------------------+---------------------------------------------------+
| Field | Value |
+-----------------------------+---------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | yqKV4tVNMPah |
| config_drive | |
| created | 2020-05-11T09:32:47Z |
| flavor | m2.nano (1) |
| hostId | |
| id | 05621371-203f-466e-80d0-ea0e49cc6660 |
| image | cirros_raw (fd44ac54-1e77-4612-86c2-362c900a715a) |
| key_name | mykey |
| name | selfservice-cirros_raw |
| progress | 0 |
| project_id | 6535a5a0ef0c4e9caa42912d02bd7d54 |
| properties | |
| security_groups | name='d47cab9e-1a97-44ee-9ce6-2311ec5344b2' |
| status | BUILD |
| updated | 2020-05-11T09:34:11Z |
| user_id | b9649ec199ce402aabc4bbfd4ca00144 |
| volumes_attached | |
+-----------------------------+--------------------------------------------------+
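With images_type = rbd, the instance's root disk is created in the vms pool as an RBD image named <instance_uuid>_disk. An illustrative check on the Ceph admin node:
# On cephnode01: the Nova root disk for the new instance shows up in the vms pool
rbd ls vms
# e.g. 05621371-203f-466e-80d0-ea0e49cc6660_disk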
#Attach a volume to the selfservice-cirros_raw instance (this can also be done from the dashboard)
#First list the volumes
[root@controller ~]# openstack volume list
+--------------------------------------+------------------+-----------+------+------------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+------------------+-----------+------+------------------------------------------+
| 0290e529-943f-49ef-ab1e-d016fa981c68 | cirros_raw_image | available | 1 | |
| 7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5 | ceph_rbd_volume1 | available | 1 | |
#Show the instance details
[root@controller ~]# openstack server show selfservice-cirros_raw
+-----------------------------+----------------------------------------------------------+
| Field | Value |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | AUTO |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2020-05-11T09:48:43.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | selfservice=172.18.1.99, 192.168.0.197 |
| config_drive | |
| created | 2020-05-11T09:47:07Z |
| flavor | m1.nano (0) |
| hostId | d7f29c07fd62aea996db001357a7e214724779ce8674bc5990567fd0 |
| id | b61f5d6d-f0c8-410e-9d0f-7fb378632481 |
| image | |
| key_name | mykey |
| name | selfservice-cirros_raw |
| progress | 0 |
| project_id | 6535a5a0ef0c4e9caa42912d02bd7d54 |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2020-05-12T01:40:21Z |
| user_id | b9649ec199ce402aabc4bbfd4ca00144 |
| volumes_attached | id='eb408878-698b-4329-8d4e-9f40ffbeaf0b' |
+-----------------------------+----------------------------------------------------------+
#Attach the volume to the selfservice-cirros_raw instance (can also be done from the dashboard)
[root@controller ~]# openstack server add volume selfservice-cirros_raw ceph_rbd_volume1
[root@controller ~]# openstack volume list
+--------------------------------------+------------------+-----------+------+-------------------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+------------------+-----------+------+-------------------------------------------------+
| eb408878-698b-4329-8d4e-9f40ffbeaf0b | | in-use | 1 | Attached to selfservice-cirros_raw on /dev/vda |
| 0290e529-943f-49ef-ab1e-d016fa981c68 | cirros_raw_image | available | 1 | |
| 7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5 | ceph_rbd_volume1 | in-use | 1 | Attached to selfservice-cirros_raw on /dev/vdb |
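After the attach, librbd on the compute node holds a watch on the volume's RBD image, which can be confirmed on the Ceph side (illustrative; the client id and port will differ):
# On cephnode01: a watcher from computel01 (192.168.0.20) indicates the volume is attached via librbd
rbd status volumes/volume-7b6d5d82-7b5e-4d44-ba11-c17769a9bdf5
# Watchers:
#         watcher=192.168.0.20:0/xxxxxxxxxx client.xxxx cookie=...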