In the first article we saw that novaclient ultimately issues the following HTTP POST request to nova:
POST /e40722e5e0c74a0b878c595c0afab5fd/servers/6a17e64d-23c7-46a3-9812-8409ad215e40/os-volume_attachments
with the following parameters:
Action: 'create', body: {"volumeAttachment": {"device": "/dev/vdc", "volumeId": "5fe8132e-f937-4c1b-8361-9984f94a7c28"}}
The detailed documentation of the attach REST API is available here: http://api.openstack.org/api-ref-compute-v2-ext.html
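For illustration, the same request can be issued by hand. Below is a minimal sketch using python-requests; the endpoint host, the /v2 prefix, and the token are illustrative assumptions (in a real deployment they come from the keystone catalog and an authentication call):

import json
import requests

token = '...'  # a valid keystone token, obtained beforehand
base = 'http://nova-api:8774/v2/e40722e5e0c74a0b878c595c0afab5fd'
body = {"volumeAttachment": {"device": "/dev/vdc",
                             "volumeId": "5fe8132e-f937-4c1b-8361-9984f94a7c28"}}

resp = requests.post(
    base + '/servers/6a17e64d-23c7-46a3-9812-8409ad215e40'
           '/os-volume_attachments',
    headers={'X-Auth-Token': token,
             'Content-Type': 'application/json'},
    data=json.dumps(body))
print(resp.status_code, resp.json())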
Starting nova-api
Now let's go back and look at how nova starts the web service that listens for the HTTP request above. In a running OpenStack nova environment you will find a nova-api process. Opening the /usr/bin/nova-api file, we can locate the entry point of the nova API service, which lives in nova/cmd/api.py in the source tree. The main flow is as follows:
1. Based on the enabled_apis option defined in nova.conf, the corresponding API services are started. For example, the following line enables the EC2 API, among others:
enabled_apis = ec2,osapi_compute,metadata
2. Each API service is in fact an instance of WSGIService: server = service.WSGIService(api, use_ssl=should_use_ssl)
During WSGIService initialization, besides the basic wsgi.Server parameter handling, the corresponding Manager class is also imported. For example, nova.conf defines the network manager class:
network_manager = nova.network.manager.FlatDHCPManager
3. Launch the service, then wait for it to finish. The call sequence is: launcher.launch_service ==> self.services.add ==> self.tg.add_thread(self.run_service, service, self.done) ==> self.run_service, i.e. a thread is spawned right away to call the start function defined by the service:
@staticmethod
def run_service(service, done):
    """Service start wrapper.

    :param service: service to run
    :param done: event to wait on until a shutdown is triggered
    :returns: None

    """
    service.start()
    systemd.notify_once()
    done.wait()
Switching to class WSGIService in nova/service.py, its start function makes four calls in order: self.manager.init_host, self.manager.pre_start_hook, self.server.start, and self.manager.post_start_hook. Here self.server is the wsgi.Server built in __init__, defined in nova/wsgi.py. wsgi.Server.start ultimately spawns a WSGI app to accept and handle HTTP requests. A condensed sketch of the whole startup flow follows.
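Putting steps 1-3 together, the main() of nova/cmd/api.py boils down to roughly the following. This is a condensed, simplified sketch; exact details vary by release:

import sys

from oslo.config import cfg

from nova import config
from nova import service

CONF = cfg.CONF

def main():
    config.parse_args(sys.argv)
    launcher = service.process_launcher()
    # One WSGIService per entry in enabled_apis; the launcher runs each
    # service's start() in its own thread, as shown in run_service above.
    for api in CONF.enabled_apis:
        should_use_ssl = api in CONF.enabled_ssl_apis
        server = service.WSGIService(api, use_ssl=should_use_ssl)
        launcher.launch_service(server, workers=server.workers or 1)
    launcher.wait()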
Starting nova-compute
The analysis is analogous, and the service startup sequence is the same. Open nova/cmd/compute.py to see how the compute service is created:
server = service.Service.create(binary='nova-compute',
                                topic=CONF.compute_topic,
                                db_allowed=CONF.conductor.use_local)
service.serve(server)
service.wait()
In the Service.create class method, the backend compute manager (class ComputeManager in nova/compute/manager.py) is instantiated. ComputeManager's __init__ constructor defines the compute RPC API interfaces and finally loads the compute driver. CONF.compute_driver must be set in nova.conf to tell nova which backend virtualization software to use (i.e. compute_driver = libvirt.LibvirtDriver):
def __init__(self, compute_driver=None, *args, **kwargs):
    """Load configuration options and connect to the hypervisor."""
    self.virtapi = ComputeVirtAPI(self)
    self.network_api = network.API()
    self.volume_api = volume.API()
    self._last_host_check = 0
    self._last_bw_usage_poll = 0
    self._bw_usage_supported = True
    self._last_bw_usage_cell_update = 0
    self.compute_api = compute.API()
    self.compute_rpcapi = compute_rpcapi.ComputeAPI()
    self.conductor_api = conductor.API()
    self.compute_task_api = conductor.ComputeTaskAPI()
    ......
    self.driver = driver.load_compute_driver(self.virtapi, compute_driver)
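The load_compute_driver call on the last line turns the compute_driver string from nova.conf into a driver instance under nova/virt. A simplified standalone sketch of the idea (nova itself does this through its importutils helpers, with more error handling):

import importlib

def load_compute_driver(virtapi, driver_name):
    # e.g. "libvirt.LibvirtDriver" -> class LibvirtDriver exposed by
    # the nova.virt.libvirt package
    module_name, class_name = driver_name.rsplit('.', 1)
    module = importlib.import_module('nova.virt.' + module_name)
    return getattr(module, class_name)(virtapi)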
Then start runs, entering the four calls described above. Let's first look at self.manager.init_host:
def init_host(self):
    """Initialization for a standalone compute service."""
    self.driver.init_host(host=self.host)
    context = nova.context.get_admin_context()
The driver here corresponds to libvirt.LibvirtDriver, and init_host essentially performs libvirt's host initialization.
Dissecting the attach API
In nova/compute/api.py we can find attach_volume. The nova client request is forwarded to the RPC API: the nova-api service handles REST requests, while communication between nova components goes through RPC calls.
def _attach_volume(self, context, instance, volume_id, device,
                   disk_bus, device_type):
    """Attach an existing volume to an existing instance.

    This method is separated to make it possible for cells version
    to override it.
    """
    ......
    self.compute_rpcapi.attach_volume(context, instance=instance,
            volume_id=volume_id, mountpoint=device, bdm=volume_bdm)

def attach_volume(self, context, instance, volume_id, device=None,
                  disk_bus=None, device_type=None):
    """Attach an existing volume to an existing instance."""
    # NOTE(vish): Fail fast if the device is not going to pass. This
    #             will need to be removed along with the test if we
    #             change the logic in the manager for what constitutes
    #             a valid device.
    if device and not block_device.match_device(device):
        raise exception.InvalidDevicePath(path=device)
    return self._attach_volume(context, instance, volume_id, device,
                               disk_bus, device_type)
In nova/compute/rpcapi.py, every RPC client request remote-calls attach_volume on the RPC server side:
def attach_volume(self, ctxt, instance, volume_id, mountpoint, bdm=None):
    ......
    cctxt = self.client.prepare(server=_compute_host(None, instance),
                                version=version)
    cctxt.cast(ctxt, 'attach_volume', **kw)
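The cctxt.cast above is the fire-and-forget RPC pattern provided by the messaging layer (oslo.messaging in this era). A stripped-down standalone sketch of the same pattern; the broker URL, topic, host name, and version are illustrative assumptions, and a running message broker is required:

from oslo.config import cfg
from oslo import messaging

# Illustrative broker URL; in nova this comes from nova.conf.
transport = messaging.get_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
target = messaging.Target(topic='compute', version='3.0')
client = messaging.RPCClient(transport, target)

# prepare() pins the request to one compute host; cast() does not wait
# for a result, so nova-api returns before the attach actually finishes.
cctxt = client.prepare(server='compute-host-1')
cctxt.cast({}, 'attach_volume',
           volume_id='5fe8132e-f937-4c1b-8361-9984f94a7c28',
           mountpoint='/dev/vdc')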
From the nova-compute startup analysis above we can see that this cctxt.cast request actually remote-calls the attach_volume function in ComputeManager, the management interface of compute instances. The corresponding RPC server side handler lives in nova/compute/manager.py. Only at this point is the concrete attach-virtual-device work handed over to the backend's DriverVolumeBlockDevice.attach.
def attach_volume(self, context, volume_id, mountpoint,
                  instance, bdm=None):
    """Attach a volume to an instance."""
    if not bdm:
        bdm = block_device_obj.BlockDeviceMapping.get_by_volume_id(
                context, volume_id)
    driver_bdm = driver_block_device.DriverVolumeBlockDevice(bdm)
    try:
        return self._attach_volume(context, instance, driver_bdm)
    except Exception:
        with excutils.save_and_reraise_exception():
            bdm.destroy(context)

...

def _attach_volume(self, context, instance, bdm):
    context = context.elevated()
    LOG.audit(_('Attaching volume %(volume_id)s to %(mountpoint)s'),
              {'volume_id': bdm.volume_id,
               'mountpoint': bdm['mount_device']},
              context=context, instance=instance)
    try:
        bdm.attach(context, instance, self.volume_api, self.driver,
                   do_check_attach=False, do_driver_attach=True)
Now let's look at the DriverVolumeBlockDevice class in nova/virt/block_device.py:
@update_db
def attach(self, context, instance, volume_api, virt_driver,
           do_check_attach=True, do_driver_attach=False):
    # Fetch the volume object by volume id.
    volume = volume_api.get(context, self.volume_id)
    ......
    # Taking LibvirtDriver as an example: obtain the volume connector of
    # the virtualization backend and initialize the connection.
    connector = virt_driver.get_volume_connector(instance)
    connection_info = volume_api.initialize_connection(context,
                                                       volume_id,
                                                       connector)
    ......
    # If do_driver_attach is False, we will attach a volume to an instance
    # at boot time. So actual attach is done by instance creation code.
    if do_driver_attach:
        encryption = encryptors.get_encryption_metadata(
            context, volume_api, volume_id, connection_info)

        try:
            virt_driver.attach_volume(
                    context, connection_info, instance,
                    self['mount_device'], disk_bus=self['disk_bus'],
                    device_type=self['device_type'], encryption=encryption)
        except Exception:  # pylint: disable=W0702
            with excutils.save_and_reraise_exception():
                LOG.exception(_("Driver failed to attach volume "
                                "%(volume_id)s at %(mountpoint)s"),
                              {'volume_id': volume_id,
                               'mountpoint': self['mount_device']},
                              context=context, instance=instance)
                volume_api.terminate_connection(context, volume_id,
                                                connector)
    self['connection_info'] = connection_info
    # Callback: nova's side of the work is done here; it is now cinder's
    # turn to update its database and so on.
    volume_api.attach(context, volume_id,
                      instance['uuid'], self['mount_device'])
At this point we reach the attach_volume function in nova/virt/libvirt/driver.py, which uses libvirt to add the volume as backing storage to the KVM instance's configuration. The rough flow: determine what type of backend storage the KVM hypervisor instance uses (i.e. NFS, iSCSI, FC), then generate the corresponding KVM configuration, mainly adding the connection info of the volume to be attached into the libvirt XML.
LibvirtDriver.attach_volume ==> LibvirtBaseVolumeDriver.connect_volume ==> conf.to_xml() ==> virt_dom.attachDeviceFlags
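To make the final step concrete, here is a minimal standalone sketch (not nova's code; the domain name and device path are hypothetical) of attaching a disk to a running KVM domain with the libvirt Python bindings, which is essentially what virt_dom.attachDeviceFlags does:

import libvirt

# Hypothetical disk already visible on the host, e.g. an iSCSI LUN.
disk_xml = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/sdb'/>
  <target dev='vdc' bus='virtio'/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # hypothetical domain name
# Apply to the live domain and persist it in the domain's config.
dom.attachDeviceFlags(disk_xml,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE |
                      libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()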
Cinder follow-up
All the volume_api calls map to nova/volume/cinder.py (imported and instantiated via volume.API()). These APIs are essentially REST requests issued by cinderclient to the cinder server:
@translate_volume_exception
def attach(self, context, volume_id, instance_uuid, mountpoint):
    cinderclient(context).volumes.attach(volume_id, instance_uuid,
                                         mountpoint)

......

@translate_volume_exception
def initialize_connection(self, context, volume_id, connector):
    return cinderclient(context).volumes.initialize_connection(volume_id,
                                                               connector)
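For comparison, the same cinder calls can be made directly with python-cinderclient. A minimal sketch of the v1-era client; the credentials, auth URL, and the two IDs below are illustrative assumptions:

from cinderclient.v1 import client

cinder = client.Client('admin', 'password', 'admin',
                       'http://keystone:5000/v2.0',
                       service_type='volume')
# Mark the volume as attached to the instance at the given mountpoint.
cinder.volumes.attach('5fe8132e-f937-4c1b-8361-9984f94a7c28',
                      '6a17e64d-23c7-46a3-9812-8409ad215e40',
                      '/dev/vdc')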
The third article in this series will continue with how the cinder side updates its database.
Summary and limitations
1. The nova analysis above follows the chain of calls behind the attach volume interface and deliberately stops at the virtualization driver layer. Different virtualization drivers attach a disk to an instance in different concrete steps; interested readers should analyze the corresponding virt driver code themselves.
2. How is the REST server actually built up? Readers will need to study WSGI and Paste on their own.
3. RPC and messaging are likewise worth studying in their own right; they are not covered in detail here.
4. nova's code base is large and its services numerous; here we only briefly introduced nova-api and nova-compute.
This concludes the second article of nova inside. Please credit the source when reposting.