Reference: https://www.cnblogs.com/CloudMan6/p/5585637.html
cinder-api
cinder-api is the gateway to the entire Cinder component: every Cinder request is handled first by cinder-api, which exposes a number of HTTP REST API endpoints to the outside world. We can query cinder-api's endpoints in Keystone:
[stack@DevStack-Rocky-Controller-31 ~]$ openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
| ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                                          |
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
| 047e022b05b7470c85ea5bb7229a56d8 | RegionOne | cinderv2     | volumev2       | True    | public    | http://10.12.30.31/volume/v2/$(project_id)s  |
| 0fb43b56acd44139a601352ad8576c6a | RegionOne | glance       | image          | True    | public    | http://10.12.30.31/image                     |
| 19bb7e53164b4ec6a1e4d4c264bb586d | RegionOne | barbican     | key-manager    | True    | internal  | http://10.12.30.31/key-manager               |
| 1c3ac7b978234b8ca994f261ac0dd4e4 | RegionOne | mistral      | workflowv2     | True    | admin     | http://10.12.30.31:8989/v2                   |
| 1e53caf2fe854ef3add9d226b051e034 | RegionOne | cinder       | volume         | True    | public    | http://10.12.30.31/volume/v1/$(project_id)s  |
| 21203ae872ea4e078161b7e44513a322 | RegionOne | nova_legacy  | compute_legacy | True    | public    | http://10.12.30.31/compute/v2/$(project_id)s |
| 22fd0fada43b440cb6b3c609fb4f6ec3 | RegionOne | barbican     | key-manager    | True    | public    | http://10.12.30.31/key-manager               |
| b83698fcabb24bd0b04ea988883f10c1 | RegionOne | cinder       | block-storage  | True    | public    | http://10.12.30.31/volume/v3/$(project_id)s  |
| c894d8a5bb0445be9f69adb8eda117d2 | RegionOne | mistral      | workflowv2     | True    | internal  | http://10.12.30.31:8989/v2                   |
| cce8033429c7447a992f9df313aafeaa | RegionOne | neutron      | network        | True    | public    | http://10.12.30.31:9696/                     |
| d685b7c6fd1b4a4d8488efecd8bb9bbc | RegionOne | keystone     | identity       | True    | public    | http://10.12.30.31/identity                  |
| d9ddc1876bb24dd8add95c113854ccef | RegionOne | nova         | compute        | True    | public    | http://10.12.30.31/compute/v2.1              |
| dfc13044d3cd46d7a2b60d0bab442013 | RegionOne | cinderv3     | volumev3       | True    | public    | http://10.12.30.31/volume/v3/$(project_id)s  |
| e4b6d476f0bc4da5bc9d977c8517afbe | RegionOne | mistral      | workflowv2     | True    | public    | http://10.12.30.31:8989/v2                   |
| eea21224c1af4e1d9ebdba70541ad49d | RegionOne | keystone     | identity       | True    | admin     | http://10.12.30.31/identity                  |
| f1963fa6fda64efb8dbc823571a25724 | RegionOne | placement    | placement      | True    | public    | http://10.12.30.31/placement                 |
+----------------------------------+-----------+--------------+----------------+---------+-----------+----------------------------------------------+
Clients send requests to the address given by the endpoint to ask cinder-api for operations. Of course, as end users we rarely send REST API requests directly; the OpenStack CLI, the Dashboard, and any other component that needs to talk to Cinder use these APIs.
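Nothing stops us from calling the REST interface by hand, though. Here is a minimal sketch using Python's requests library against the cinderv3 endpoint listed above; the token and project ID are placeholders (a real token can be obtained with "openstack token issue"):

import requests

# Placeholders: substitute a real token and your own project ID before running.
TOKEN = "gAAAAA..."
PROJECT_ID = "0c1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c"
BASE = "http://10.12.30.31/volume/v3/" + PROJECT_ID

# GET /v3/{project_id}/volumes lists the project's volumes via cinder-api.
resp = requests.get(BASE + "/volumes", headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
for vol in resp.json()["volumes"]:
    print(vol["id"], vol["name"])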
cinder-api processes an incoming HTTP API request in three steps (a toy sketch of these steps follows the list):
1. Check that the parameters passed in by the client are legal and valid
2. Call other Cinder subservices to handle the client's request
3. Serialize the results returned by those subservices and send them back to the client
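A purely illustrative Python version of the three steps; the real logic lives under cinder/api/ and hands work to other services over RPC, so the hypothetical schedule_volume below merely stands in for that call:

import uuid

def schedule_volume(size):
    # Hypothetical stand-in for the RPC hand-off to other Cinder subservices.
    return {"id": str(uuid.uuid4()), "size": size, "status": "creating"}

def handle_create_volume(body):
    # Step 1: check that the client's parameters are legal and valid.
    size = body.get("volume", {}).get("size")
    if not isinstance(size, int) or size <= 0:
        raise ValueError("size must be a positive integer")
    # Step 2: call another Cinder subservice to do the actual work.
    volume = schedule_volume(size)
    # Step 3: serialize the subservice's result back to the client.
    return {"volume": volume}

print(handle_create_volume({"volume": {"size": 1}}))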
Which requests does cinder-api accept? Simply put, any operation related to the volume lifecycle is something cinder-api can respond to, and most of these operations are visible in the Dashboard.
Open the volume management page and expand the dropdown menu: the list shows the operations cinder-api can carry out.
cinder-scheduler
When a volume is created, cinder-scheduler selects the most suitable storage node based on criteria such as free capacity and volume type, and then asks that node to create the volume.
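The real scheduler applies configurable filters and weighers (CapacityFilter and CapacityWeigher among them); the following made-up Python sketch captures just the filter-then-weigh idea, with invented host names and numbers:

hosts = [
    {"name": "storage-1", "free_capacity_gb": 1.15, "total_capacity_gb": 24.0},
    {"name": "storage-2", "free_capacity_gb": 18.0, "total_capacity_gb": 24.0},
]

def capacity_filter(hosts, size_gb):
    # Keep only hosts with enough free space (the CapacityFilter idea).
    return [h for h in hosts if h["free_capacity_gb"] >= size_gb]

def capacity_weigher(hosts):
    # Prefer the host with the most free space (the CapacityWeigher idea).
    return max(hosts, key=lambda h: h["free_capacity_gb"])

candidates = capacity_filter(hosts, size_gb=2)
print(capacity_weigher(candidates)["name"])  # prints: storage-2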
cinder-volume
cinder-volume runs on the storage nodes; every volume operation in OpenStack is ultimately carried out by cinder-volume. Note that cinder-volume does not manage the physical storage itself: that is the job of the volume provider. cinder-volume and the volume provider together implement volume lifecycle management.
Supporting multiple volume providers through the Driver architecture
The obvious follow-up question: with so many block storage products and solutions (volume providers) on the market, how does cinder-volume cooperate with all of them?
The answer is the Driver architecture we discussed earlier. cinder-volume defines a unified interface for volume providers; a provider only needs to implement this interface to be plugged into OpenStack as a driver. (The original post shows a schematic of the Cinder driver architecture here.)
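To make the plug-in idea concrete, here is a toy Python illustration. The class and method names merely echo the real driver API; the actual base classes live in cinder/volume/driver.py and define far more methods:

class BaseVolumeDriver:
    # Toy version of the uniform interface cinder-volume defines.
    def create_volume(self, volume):
        raise NotImplementedError

    def delete_volume(self, volume):
        raise NotImplementedError

class ToyLVMDriver(BaseVolumeDriver):
    # A pretend provider driver; a real one would shell out to LVM tools.
    def create_volume(self, volume):
        print("lvcreate -n %(name)s -L %(size)dG ..." % volume)

    def delete_volume(self, volume):
        print("lvremove ... %(name)s" % volume)

ToyLVMDriver().create_volume({"name": "volume-demo", "size": 1})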
Looking under /opt/stack/cinder/cinder/volume/drivers/, we can see that the OpenStack source tree already ships drivers for many volume providers:
[stack@DevStack-Rocky-Controller-31 ~]$ ls /opt/stack/cinder/cinder/volume/drivers/
coprhd ibm nfs.py synology
datacore infinidat.py nimble.py tintri.py
datera __init__.py prophetstor veritas
dell_emc __init__.pyc pure.py veritas_access
disco inspur qnap.py veritas_cnfs.py
dothill kaminario quobyte.py vmware
drbdmanagedrv.py lenovo rbd.py vzstorage.py
fujitsu lvm.py remotefs.py windows
fusionstorage lvm.pyc san zadara.py
hgst.py nec sheepdog.py zfssa
hpe netapp solidfire.py
huawei nexenta storpool.py
On a storage node, the volume_driver option in the configuration file /etc/cinder/cinder.conf selects which volume provider's driver the node uses. The example below shows that LVM is in use:
[stack@DevStack-Rocky-Controller-31 ~]$ cat /etc/cinder/cinder.conf | grep '_dri'
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
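For context, a typical LVM backend section in a DevStack-generated cinder.conf looks roughly like this; the section name, volume group, and backend name vary by deployment:

[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = stack-volumes-lvmdriver-1
volume_backend_name = lvmdriver-1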
Reporting the storage node's status to OpenStack periodically
Earlier we saw that cinder-scheduler uses CapacityFilter and CapacityWeigher, both of which rely on the storage nodes' free capacity. This raises a question: how does Cinder learn each storage node's free capacity in the first place?
The answer: cinder-volume reports it to Cinder periodically.
The cinder-volume log shows that, at regular intervals, cinder-volume reports the current storage node's resource usage.
The distilled log entries:
oslo_service.periodic_task Running periodic task VolumeManager.publish_service_capabilities
cinder.volume.drivers.lvm Updating volume stats
oslo_concurrency.processutils Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix stack-volumes-lvmdriver-1
oslo_concurrency.processutils CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix stack-volumes-lvmdriver-1" returned: 0 in 0.240s
oslo_concurrency.processutils Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix stack-volumes-lvmdriver-1
oslo_concurrency.processutils CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix stack-volumes-lvmdriver-1" returned: 0 in 0.235s
oslo_concurrency.processutils Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o size,data_percent --separator : --nosuffix /dev/stack-volumes-lvmdriver-1/stack-volumes-lvmdriver-1-pool
oslo_concurrency.processutils CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o size,data_percent --separator : --nosuffix /dev/stack-volumes-lvmdriver-1/stack-volumes-lvmdriver-1-pool" returned: 0 in 0.237s
oslo_concurrency.processutils Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix stack-volumes-lvmdriver-1
oslo_concurrency.processutils CMD "sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C lvs --noheadings --unit=g -o vg_name,name,size --nosuffix stack-volumes-lvmdriver-1" returned: 0 in 0.250s
cinder.manager Notifying Schedulers of capabilities ...
Running the commands from the log by hand gives the following output:
[root@DevStack-Rocky-Compute-32 ~]# vgs --noheadings --unit=g -o name,size,free,lv_count,uuid --separator : --nosuffix stack-volumes-lvmdriver-1
stack-volumes-lvmdriver-1:24.00:1.15:3:Qy82rv-xCV5-fb1N-Bvks-c8Df-wxr8-dxwepB
[root@DevStack-Rocky-Compute-32 ~]# lvs --noheadings --unit=g -o vg_name,name,size --nosuffix stack-volumes-lvmdriver-1
stack-volumes-lvmdriver-1 stack-volumes-lvmdriver-1-pool 22.80
stack-volumes-lvmdriver-1 volume-3ef7baa3-ef61-49b0-b31c-bc9f0c1312b3 1.00
stack-volumes-lvmdriver-1 volume-9ebeca17-3b09-4370-a9aa-8d708924b865 1.00
[root@DevStack-Rocky-Compute-32 ~]# lvs --noheadings --unit=g -o size,data_percent --separator : --nosuffix /dev/stack-volumes-lvmdriver-1/stack-volumes-lvmdriver-1-pool
22.80:0.34
[root@DevStack-Rocky-Compute-32 ~]# lvs --noheadings --unit=g -o vg_name,name,size --nosuffix stack-volumes-lvmdriver-1
stack-volumes-lvmdriver-1 stack-volumes-lvmdriver-1-pool 22.80
stack-volumes-lvmdriver-1 volume-3ef7baa3-ef61-49b0-b31c-bc9f0c1312b3 1.00
stack-volumes-lvmdriver-1 volume-9ebeca17-3b09-4370-a9aa-8d708924b865 1.00
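The LVM driver turns this colon-separated vgs output into the capacity stats it publishes to the scheduler. A minimal illustrative parser (not Cinder's actual code) for the vgs line above:

def parse_vgs_line(line):
    # Fields follow the --separator : ordering used in the command above:
    # name:size:free:lv_count:uuid
    name, size, free, lv_count, vg_uuid = line.strip().split(":")
    return {
        "volume_group": name,
        "total_capacity_gb": float(size),
        "free_capacity_gb": float(free),
        "lv_count": int(lv_count),
        "uuid": vg_uuid,
    }

stats = parse_vgs_line(
    "stack-volumes-lvmdriver-1:24.00:1.15:3:"
    "Qy82rv-xCV5-fb1N-Bvks-c8Df-wxr8-dxwepB")
print(stats["total_capacity_gb"], stats["free_capacity_gb"])  # 24.0 1.15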
Implementing volume lifecycle management
Cinder's management of the volume lifecycle (create, extend, attach, snapshot, delete, and so on) is ultimately performed by cinder-volume. Later chapters will study these operations in detail.
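As a quick preview, the same lifecycle operations can be driven from the OpenStack CLI; the volume name and sizes below are arbitrary examples:

openstack volume create --size 1 demo-volume
openstack volume set --size 2 demo-volume            # extend the volume
openstack volume snapshot create --volume demo-volume demo-snap
openstack volume snapshot delete demo-snap
openstack volume delete demo-volume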