OpenStack Nova Source Code Walkthrough: Creating a Virtual Machine

1. nova/api/openstack/compute/servers.py: create()

The create function is wrapped by three kinds of decorators: @wsgi.response, @wsgi.expected_errors, and @validation.schema.

(1) @wsgi.response sets the success status code returned when the request completes: 202.

(2) @wsgi.expected_errors declares the failure status codes the request may return: 400, 403, and 409.

(3) @validation.schema intercepts the HTTP request before create() runs and validates the request parameters against a JSON schema; if validation succeeds the request enters create(), otherwise a failure status code is returned. Since a separate schema is registered for each microversion range, this decorator is also what provides API version compatibility.

The create function itself does two things:

(1) Collects and validates the request parameters in preparation for creating the virtual machine.

(2) Calls create() in nova/compute/api.py, which provisions the instance(s) and sends the instance information to the scheduler.
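
As a concrete illustration, a minimal request body that would flow into create() might look like the following. The field values are hypothetical; the authoritative definitions are the schemas in nova/api/openstack/compute/schemas/servers.py.

```python
# A minimal, hypothetical request body for POST /servers; field values
# are made up for illustration. create() receives this dict as `body`,
# and body['server'] becomes server_dict.
body = {
    "server": {
        "name": "demo-vm",          # normalized via common.normalize_name()
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        "min_count": 1,             # optional; defaults to 1
        "max_count": 2,             # optional; defaults to min_count
    },
    "os:scheduler_hints": {"group": "anti-affinity-group"},  # optional
}

server_dict = body["server"]
min_count = int(server_dict.get("min_count", 1))
max_count = int(server_dict.get("max_count", min_count))
```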

  1     @wsgi.response(202)
  2     @wsgi.expected_errors((400, 403, 409))
  3     @validation.schema(schema_servers.base_create_v20, '2.0', '2.0')
  4     @validation.schema(schema_servers.base_create, '2.1', '2.18')
  5     @validation.schema(schema_servers.base_create_v219, '2.19', '2.31')
  6     @validation.schema(schema_servers.base_create_v232, '2.32', '2.32')
  7     @validation.schema(schema_servers.base_create_v233, '2.33', '2.36')
  8     @validation.schema(schema_servers.base_create_v237, '2.37', '2.41')
  9     @validation.schema(schema_servers.base_create_v242, '2.42', '2.51')
 10     @validation.schema(schema_servers.base_create_v252, '2.52', '2.56')
 11     @validation.schema(schema_servers.base_create_v257, '2.57', '2.62')
 12     @validation.schema(schema_servers.base_create_v263, '2.63', '2.66')
 13     @validation.schema(schema_servers.base_create_v267, '2.67', '2.73')
 14     @validation.schema(schema_servers.base_create_v274, '2.74')
 15     def create(self, req, body):
 16         """Creates a new server for a given user."""
 17         context = req.environ['nova.context']
 18         server_dict = body['server']
 19         password = self._get_server_admin_password(server_dict)
 20         name = common.normalize_name(server_dict['name'])
 21         description = name
 22         if api_version_request.is_supported(req, min_version='2.19'):
 23             description = server_dict.get('description')
 24 
 25         # Arguments to be passed to instance create function
 26         create_kwargs = {}
 27 
 28         create_kwargs['user_data'] = server_dict.get('user_data')
 29         # NOTE(alex_xu): In v2.1 API compat mode we strip spaces on
 30         # keypair create. But we do not strip spaces here, for backward
 31         # compatibility: some users already created keypairs and names
 32         # with leading/trailing spaces via the legacy v2 API.
 33         create_kwargs['key_name'] = server_dict.get('key_name')
 34         create_kwargs['config_drive'] = server_dict.get('config_drive')
 35         security_groups = server_dict.get('security_groups')
 36         if security_groups is not None:
 37             create_kwargs['security_groups'] = [
 38                 sg['name'] for sg in security_groups if sg.get('name')]
 39             create_kwargs['security_groups'] = list(
 40                 set(create_kwargs['security_groups']))
 41 
 42         scheduler_hints = {}
 43         if 'os:scheduler_hints' in body:
 44             scheduler_hints = body['os:scheduler_hints']
 45         elif 'OS-SCH-HNT:scheduler_hints' in body:
 46             scheduler_hints = body['OS-SCH-HNT:scheduler_hints']
 47         create_kwargs['scheduler_hints'] = scheduler_hints
 48 
 49         # min_count and max_count are optional.  If they exist, they may come
 50         # in as strings.  Verify that they are valid integers and > 0.
 51         # Also, we want to default 'min_count' to 1, and default
 52         # 'max_count' to be 'min_count'.
 53         min_count = int(server_dict.get('min_count', 1))
 54         max_count = int(server_dict.get('max_count', min_count))
 55         if min_count > max_count:
 56             msg = _('min_count must be <= max_count')
 57             raise exc.HTTPBadRequest(explanation=msg)
 58         create_kwargs['min_count'] = min_count
 59         create_kwargs['max_count'] = max_count
 60 
 61         availability_zone = server_dict.pop("availability_zone", None)
 62 
 63         if api_version_request.is_supported(req, min_version='2.52'):
 64             create_kwargs['tags'] = server_dict.get('tags')
 65 
 66         helpers.translate_attributes(helpers.CREATE,
 67                                      server_dict, create_kwargs)
 68 
 69         target = {
 70             'project_id': context.project_id,
 71             'user_id': context.user_id,
 72             'availability_zone': availability_zone}
 73         context.can(server_policies.SERVERS % 'create', target)
 74 
 75         # Skip policy check for 'create:trusted_certs' if no trusted
 76         # certificate IDs were provided.
 77         trusted_certs = server_dict.get('trusted_image_certificates', None)
 78         if trusted_certs:
 79             create_kwargs['trusted_certs'] = trusted_certs
 80             context.can(server_policies.SERVERS % 'create:trusted_certs',
 81                         target=target)
 82 
 83         parse_az = self.compute_api.parse_availability_zone
 84         try:
 85             availability_zone, host, node = parse_az(context,
 86                                                      availability_zone)
 87         except exception.InvalidInput as err:
 88             raise exc.HTTPBadRequest(explanation=six.text_type(err))
 89         if host or node:
 90             context.can(server_policies.SERVERS % 'create:forced_host', {})
 91 
 92         if api_version_request.is_supported(req, min_version='2.74'):
 93             self._process_hosts_for_create(context, target, server_dict,
 94                                            create_kwargs, host, node)
 95 
 96         self._process_bdms_for_create(
 97             context, target, server_dict, create_kwargs)
 98 
 99         image_uuid = self._image_from_req_data(server_dict, create_kwargs)
100 
101         self._process_networks_for_create(
102             context, target, server_dict, create_kwargs)
103 
104         flavor_id = self._flavor_id_from_req_data(body)
105         try:
106             inst_type = flavors.get_flavor_by_flavor_id(
107                 flavor_id, ctxt=context, read_deleted="no")
108 
109             supports_multiattach = common.supports_multiattach_volume(req)
110             supports_port_resource_request = \
111                 common.supports_port_resource_request(req)
112             (instances, resv_id) = self.compute_api.create(context,
113                                                            inst_type,
114                                                            image_uuid,
115                                                            display_name=name,
116                                                            display_description=description,
117                                                            availability_zone=availability_zone,
118                                                            forced_host=host, forced_node=node,
119                                                            metadata=server_dict.get('metadata', {}),
120                                                            admin_password=password,
121                                                            check_server_group_quota=True,
122                                                            supports_multiattach=supports_multiattach,
123                                                            supports_port_resource_request=supports_port_resource_request,
124                                                            **create_kwargs)
125         except (exception.QuotaError,
126                 exception.PortLimitExceeded) as error:
127             raise exc.HTTPForbidden(
128                 explanation=error.format_message())
129         except exception.ImageNotFound:
130             msg = _("Can not find requested image")
131             raise exc.HTTPBadRequest(explanation=msg)
132         except exception.KeypairNotFound:
133             msg = _("Invalid key_name provided.")
134             raise exc.HTTPBadRequest(explanation=msg)
135         except exception.ConfigDriveInvalidValue:
136             msg = _("Invalid config_drive provided.")
137             raise exc.HTTPBadRequest(explanation=msg)
138         except (exception.BootFromVolumeRequiredForZeroDiskFlavor,
139                 exception.ExternalNetworkAttachForbidden) as error:
140             raise exc.HTTPForbidden(explanation=error.format_message())
141         except messaging.RemoteError as err:
142             msg = "%(err_type)s: %(err_msg)s" % {'err_type': err.exc_type,
143                                                  'err_msg': err.value}
144             raise exc.HTTPBadRequest(explanation=msg)
145         except UnicodeDecodeError as error:
146             msg = "UnicodeError: %s" % error
147             raise exc.HTTPBadRequest(explanation=msg)
148         except (exception.ImageNotActive,
149                 exception.ImageBadRequest,
150                 exception.ImageNotAuthorized,
151                 exception.FixedIpNotFoundForAddress,
152                 exception.FlavorNotFound,
153                 exception.FlavorDiskTooSmall,
154                 exception.FlavorMemoryTooSmall,
155                 exception.InvalidMetadata,
156                 exception.InvalidVolume,
157                 exception.MultiplePortsNotApplicable,
158                 exception.InvalidFixedIpAndMaxCountRequest,
159                 exception.InstanceUserDataMalformed,
160                 exception.PortNotFound,
161                 exception.FixedIpAlreadyInUse,
162                 exception.SecurityGroupNotFound,
163                 exception.PortRequiresFixedIP,
164                 exception.NetworkRequiresSubnet,
165                 exception.NetworkNotFound,
166                 exception.InvalidBDM,
167                 exception.InvalidBDMSnapshot,
168                 exception.InvalidBDMVolume,
169                 exception.InvalidBDMImage,
170                 exception.InvalidBDMBootSequence,
171                 exception.InvalidBDMLocalsLimit,
172                 exception.InvalidBDMVolumeNotBootable,
173                 exception.InvalidBDMEphemeralSize,
174                 exception.InvalidBDMFormat,
175                 exception.InvalidBDMSwapSize,
176                 exception.VolumeTypeNotFound,
177                 exception.AutoDiskConfigDisabledByImage,
178                 exception.InstanceGroupNotFound,
179                 exception.SnapshotNotFound,
180                 exception.UnableToAutoAllocateNetwork,
181                 exception.MultiattachNotSupportedOldMicroversion,
182                 exception.CertificateValidationFailed,
183                 exception.CreateWithPortResourceRequestOldVersion,
184                 exception.ComputeHostNotFound) as error:
185             raise exc.HTTPBadRequest(explanation=error.format_message())
186         except INVALID_FLAVOR_IMAGE_EXCEPTIONS as error:
187             raise exc.HTTPBadRequest(explanation=error.format_message())
188         except (exception.PortInUse,
189                 exception.InstanceExists,
190                 exception.NetworkAmbiguous,
191                 exception.NoUniqueMatch,
192                 exception.VolumeTypeSupportNotYetAvailable) as error:
193             raise exc.HTTPConflict(explanation=error.format_message())
194 
195         # If the caller wanted a reservation_id, return it
196         if server_dict.get('return_reservation_id', False):
197             return wsgi.ResponseObject({'reservation_id': resv_id})
198 
199         server = self._view_builder.create(req, instances[0])
200 
201         if CONF.api.enable_instance_password:
202             server['server']['adminPass'] = password
203 
204         robj = wsgi.ResponseObject(server)
205 
206         return self._add_location(robj)

Lines 17–111 collect and validate the request data and package it into create_kwargs.
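
The min_count/max_count handling on lines 53–57 can be sketched in isolation (the function name here is ours, not Nova's):

```python
def normalize_counts(server_dict):
    """Mirror the min_count/max_count logic in create(): default
    min_count to 1, default max_count to min_count, and reject
    min_count > max_count (Nova raises HTTPBadRequest there)."""
    min_count = int(server_dict.get('min_count', 1))
    max_count = int(server_dict.get('max_count', min_count))
    if min_count > max_count:
        raise ValueError('min_count must be <= max_count')
    return min_count, max_count
```

Note that the values may arrive as strings (e.g. '2'), which is why int() is applied before the comparison.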

Line 73,

context.can(server_policies.SERVERS % 'create', target)

ultimately calls authorize() in nova/policy.py:

 1 def authorize(context, action, target=None, do_raise=True, exc=None):
 2     """Verifies that the action is valid on the target in this context.
 3 
 4        :param context: nova context
 5        :param action: string representing the action to be checked
 6            this should be colon separated for clarity.
 7            i.e. ``compute:create_instance``,
 8            ``compute:attach_volume``,
 9            ``volume:attach_volume``
10        :param target: dictionary representing the object of the action
11            for object creation this should be a dictionary representing the
12            location of the object e.g. ``{'project_id': instance.project_id}``
13             If None, then this default target will be considered:
14             {'project_id': self.project_id, 'user_id': self.user_id}
15        :param do_raise: if True (the default), raises PolicyNotAuthorized;
16            if False, returns False
17        :param exc: Class of the exception to raise if the check fails.
18                    Any remaining arguments passed to :meth:`authorize` (both
19                    positional and keyword arguments) will be passed to
20                    the exception class. If not specified,
21                    :class:`PolicyNotAuthorized` will be used.
22 
23        :raises nova.exception.PolicyNotAuthorized: if verification fails
24            and do_raise is True. Or if 'exc' is specified it will raise an
25            exception of that type.
26 
27        :return: returns a non-False value (not necessarily "True") if
28            authorized, and the exact value False if not authorized and
29            do_raise is False.
30     """
31     init()
32     credentials = context.to_policy_values()
33     if not exc:
34         exc = exception.PolicyNotAuthorized
35 
36     # Legacy fallback for empty target from context.can()
37     # should be removed once we improve testing and scope checks
38     if target is None:
39         target = default_target(context)
40 
41     try:
42         result = _ENFORCER.authorize(action, target, credentials,
43                                      do_raise=do_raise, exc=exc, action=action)
44     except policy.PolicyNotRegistered:
45         with excutils.save_and_reraise_exception():
46             LOG.exception(_LE('Policy not registered'))
47     except Exception:
48         with excutils.save_and_reraise_exception():
49             LOG.debug('Policy check for %(action)s failed with credentials '
50                       '%(credentials)s',
51                       {'action': action, 'credentials': credentials})
52     return result

init() on line 31 is an initialization function; its main job is to load the policy rules and the policy configuration file.

 1 def init(policy_file=None, rules=None, default_rule=None, use_conf=True):
 2     """Init an Enforcer class.
 3 
 4        :param policy_file: Custom policy file to use, if none is specified,
 5                            `CONF.policy_file` will be used.
 6        :param rules: Default dictionary / Rules to use. It will be
 7                      considered just in the first instantiation.
 8        :param default_rule: Default rule to use, CONF.default_rule will
 9                             be used if none is specified.
10        :param use_conf: Whether to load rules from config file.
11     """
12 
13     global _ENFORCER
14     global saved_file_rules
15 
16     if not _ENFORCER:
17         _ENFORCER = policy.Enforcer(CONF,
18                                     policy_file=policy_file,
19                                     rules=rules,
20                                     default_rule=default_rule,
21                                     use_conf=use_conf)
22         register_rules(_ENFORCER)
23         _ENFORCER.load_rules()
24 
25     # Only the rules which are loaded from file may be changed.
26     current_file_rules = _ENFORCER.file_rules
27     current_file_rules = _serialize_rules(current_file_rules)
28 
29     # Checks whether the rules are updated in the runtime
30     if saved_file_rules != current_file_rules:
31         _warning_for_deprecated_user_based_rules(current_file_rules)
32         saved_file_rules = copy.deepcopy(current_file_rules)

Line 22 registers the policy rules (see the source for details).
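
Stripped of the oslo.policy details, the lazy-singleton pattern that init() implements looks roughly like this (FakeEnforcer is a stand-in for oslo.policy's Enforcer; all names and the sample rule are illustrative):

```python
_ENFORCER = None

class FakeEnforcer:
    """Illustrative stand-in for oslo.policy's Enforcer class."""
    def __init__(self):
        self.rules = {}
        self.loaded = False

    def register(self, name, check_str):
        self.rules[name] = check_str

    def load_rules(self):
        # The real Enforcer reads rule overrides from policy files here.
        self.loaded = True

def init():
    """Build the enforcer once, register the default rules, and load
    the policy file; every later call reuses the same module-level
    object, just like nova/policy.py's init()."""
    global _ENFORCER
    if not _ENFORCER:
        _ENFORCER = FakeEnforcer()
        _ENFORCER.register('os_compute_api:servers:create',
                           'rule:admin_or_owner')
        _ENFORCER.load_rules()
    return _ENFORCER
```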

Execution then jumps to the create() method in nova/compute/api.py:

 1     @hooks.add_hook("create_instance")
 2     def create(self, context, instance_type,
 3                image_href, kernel_id=None, ramdisk_id=None,
 4                min_count=None, max_count=None,
 5                display_name=None, display_description=None,
 6                key_name=None, key_data=None, security_groups=None,
 7                availability_zone=None, forced_host=None, forced_node=None,
 8                user_data=None, metadata=None, injected_files=None,
 9                admin_password=None, block_device_mapping=None,
10                access_ip_v4=None, access_ip_v6=None, requested_networks=None,
11                config_drive=None, auto_disk_config=None, scheduler_hints=None,
12                legacy_bdm=True, shutdown_terminate=False,
13                check_server_group_quota=False, tags=None,
14                supports_multiattach=False, trusted_certs=None,
15                supports_port_resource_request=False,
16                requested_host=None, requested_hypervisor_hostname=None):
17         """Provision instances, sending instance information to the
18         scheduler.  The scheduler will determine where the instance(s)
19         go and will handle creating the DB entries.
20 
21         Returns a tuple of (instances, reservation_id)
22         """
23         if requested_networks and max_count is not None and max_count > 1:
24             self._check_multiple_instances_with_specified_ip(
25                 requested_networks)
26             if utils.is_neutron():
27                 self._check_multiple_instances_with_neutron_ports(
28                     requested_networks)
29 
30         if availability_zone:
31             available_zones = availability_zones. \
32                 get_availability_zones(context.elevated(), self.host_api,
33                                        get_only_available=True)
34             if forced_host is None and availability_zone not in \
35                     available_zones:
36                 msg = _('The requested availability zone is not available')
37                 raise exception.InvalidRequest(msg)
38 
39         filter_properties = scheduler_utils.build_filter_properties(
40             scheduler_hints, forced_host, forced_node, instance_type)
41 
42         return self._create_instance(
43             context, instance_type,
44             image_href, kernel_id, ramdisk_id,
45             min_count, max_count,
46             display_name, display_description,
47             key_name, key_data, security_groups,
48             availability_zone, user_data, metadata,
49             injected_files, admin_password,
50             access_ip_v4, access_ip_v6,
51             requested_networks, config_drive,
52             block_device_mapping, auto_disk_config,
53             filter_properties=filter_properties,
54             legacy_bdm=legacy_bdm,
55             shutdown_terminate=shutdown_terminate,
56             check_server_group_quota=check_server_group_quota,
57             tags=tags, supports_multiattach=supports_multiattach,
58             trusted_certs=trusted_certs,
59             supports_port_resource_request=supports_port_resource_request,
60             requested_host=requested_host,
61             requested_hypervisor_hostname=requested_hypervisor_hostname)

This code validates the requested network IPs and ports, fetches the list of available availability zones and checks that the requested zone is in it, builds the filter properties the scheduler will later use to select a suitable host, and finally calls the _create_instance() method in nova/compute/api.py.
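
For reference, build_filter_properties() packs the scheduling constraints into a plain dict. A simplified re-implementation, for illustration only, behaves like this:

```python
def build_filter_properties(scheduler_hints, forced_host,
                            forced_node, instance_type):
    """Simplified sketch of scheduler_utils.build_filter_properties():
    the scheduler filters candidate hosts using these keys. The forced
    host/node entries are only added when actually requested."""
    filter_properties = {'scheduler_hints': scheduler_hints,
                         'instance_type': instance_type}
    if forced_host:
        filter_properties['force_hosts'] = [forced_host]
    if forced_node:
        filter_properties['force_nodes'] = [forced_node]
    return filter_properties
```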

  1       def _create_instance(self, context, instance_type,
  2                          image_href, kernel_id, ramdisk_id,
  3                          min_count, max_count,
  4                          display_name, display_description,
  5                          key_name, key_data, security_groups,
  6                          availability_zone, user_data, metadata, injected_files,
  7                          admin_password, access_ip_v4, access_ip_v6,
  8                          requested_networks, config_drive,
  9                          block_device_mapping, auto_disk_config, filter_properties,
 10                          reservation_id=None, legacy_bdm=True, shutdown_terminate=False,
 11                          check_server_group_quota=False, tags=None,
 12                          supports_multiattach=False, trusted_certs=None,
 13                          supports_port_resource_request=False,
 14                          requested_host=None, requested_hypervisor_hostname=None):
 15         """Verify all the input parameters regardless of the provisioning
 16         strategy being performed and schedule the instance(s) for
 17         creation.
 18         """
 19 
 20         # Normalize and setup some parameters
 21         if reservation_id is None:
 22             reservation_id = utils.generate_uid('r')
 23         security_groups = security_groups or ['default']
 24         min_count = min_count or 1
 25         max_count = max_count or min_count
 26         block_device_mapping = block_device_mapping or []
 27         tags = tags or []
 28 
 29         if image_href:
 30             image_id, boot_meta = self._get_image(context, image_href)
 31         else:
 32             # This is similar to the logic in _retrieve_trusted_certs_object.
 33             if (trusted_certs or
 34                     (CONF.glance.verify_glance_signatures and
 35                      CONF.glance.enable_certificate_validation and
 36                      CONF.glance.default_trusted_certificate_ids)):
 37                 msg = _("Image certificate validation is not supported "
 38                         "when booting from volume")
 39                 raise exception.CertificateValidationFailed(message=msg)
 40             image_id = None
 41             boot_meta = self._get_bdm_image_metadata(
 42                 context, block_device_mapping, legacy_bdm)
 43 
 44         self._check_auto_disk_config(image=boot_meta,
 45                                      auto_disk_config=auto_disk_config)
 46 
 47         base_options, max_net_count, key_pair, security_groups, \
 48         network_metadata = self._validate_and_build_base_options(
 49             context, instance_type, boot_meta, image_href, image_id,
 50             kernel_id, ramdisk_id, display_name, display_description,
 51             key_name, key_data, security_groups, availability_zone,
 52             user_data, metadata, access_ip_v4, access_ip_v6,
 53             requested_networks, config_drive, auto_disk_config,
 54             reservation_id, max_count, supports_port_resource_request)
 55 
 56         # max_net_count is the maximum number of instances requested by the
 57         # user adjusted for any network quota constraints, including
 58         # consideration of connections to each requested network
 59         if max_net_count < min_count:
 60             raise exception.PortLimitExceeded()
 61         elif max_net_count < max_count:
 62             LOG.info("max count reduced from %(max_count)d to "
 63                      "%(max_net_count)d due to network port quota",
 64                      {'max_count': max_count,
 65                       'max_net_count': max_net_count})
 66             max_count = max_net_count
 67 
 68         block_device_mapping = self._check_and_transform_bdm(context,
 69                                                              base_options, instance_type, boot_meta, min_count,
 70                                                              max_count,
 71                                                              block_device_mapping, legacy_bdm)
 72 
 73         # We can't do this check earlier because we need bdms from all sources
 74         # to have been merged in order to get the root bdm.
 75         # Set validate_numa=False since numa validation is already done by
 76         # _validate_and_build_base_options().
 77         self._checks_for_create_and_rebuild(context, image_id, boot_meta,
 78                                             instance_type, metadata, injected_files,
 79                                             block_device_mapping.root_bdm(), validate_numa=False)
 80 
 81         instance_group = self._get_requested_instance_group(context,
 82                                                             filter_properties)
 83 
 84         tags = self._create_tag_list_obj(context, tags)
 85 
 86         instances_to_build = self._provision_instances(
 87             context, instance_type, min_count, max_count, base_options,
 88             boot_meta, security_groups, block_device_mapping,
 89             shutdown_terminate, instance_group, check_server_group_quota,
 90             filter_properties, key_pair, tags, trusted_certs,
 91             supports_multiattach, network_metadata,
 92             requested_host, requested_hypervisor_hostname)
 93 
 94         instances = []
 95         request_specs = []
 96         build_requests = []
 97         for rs, build_request, im in instances_to_build:
 98             build_requests.append(build_request)
 99             instance = build_request.get_new_instance(context)
100             instances.append(instance)
101             request_specs.append(rs)
102 
103         self.compute_task_api.schedule_and_build_instances(
104             context,
105             build_requests=build_requests,
106             request_spec=request_specs,
107             image=boot_meta,
108             admin_password=admin_password,
109             injected_files=injected_files,
110             requested_networks=requested_networks,
111             block_device_mapping=block_device_mapping,
112             tags=tags)
113 
114         return instances, reservation_id

This code gathers the disk and image information needed to create the instance, runs the remaining pre-creation checks, provisions the build requests, and finally calls the schedule_and_build_instances() method in nova/conductor/api.py.
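
_provision_instances() returns a list of (RequestSpec, BuildRequest, InstanceMapping) tuples, and the loop near the end of _create_instance() simply splits them into parallel lists. With stand-in values, the unpacking works like this:

```python
# Stand-ins for the (RequestSpec, BuildRequest, InstanceMapping)
# tuples returned by _provision_instances(); the values are illustrative.
instances_to_build = [('rs1', 'br1', 'im1'), ('rs2', 'br2', 'im2')]

# Mirror of the unpacking loop that precedes the
# schedule_and_build_instances() call.
request_specs = []
build_requests = []
for rs, build_request, im in instances_to_build:
    build_requests.append(build_request)
    request_specs.append(rs)
```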

1       def schedule_and_build_instances(self, context, build_requests,
2                                      request_spec, image,
3                                      admin_password, injected_files,
4                                      requested_networks, block_device_mapping,
5                                      tags=None):
6         self.conductor_compute_rpcapi.schedule_and_build_instances(
7             context, build_requests, request_spec, image,
8             admin_password, injected_files, requested_networks,
9             block_device_mapping, tags)

This method is only a thin pass-through: it immediately delegates to the schedule_and_build_instances() method in nova/conductor/rpcapi.py.

 1     def schedule_and_build_instances(self, context, build_requests,
 2                                      request_specs,
 3                                      image, admin_password, injected_files,
 4                                      requested_networks,
 5                                      block_device_mapping,
 6                                      tags=None):
 7         version = '1.17'
 8         kw = {'build_requests': build_requests,
 9               'request_specs': request_specs,
10               'image': jsonutils.to_primitive(image),
11               'admin_password': admin_password,
12               'injected_files': injected_files,
13               'requested_networks': requested_networks,
14               'block_device_mapping': block_device_mapping,
15               'tags': tags}
16 
17         if not self.client.can_send_version(version):
18             version = '1.16'
19             del kw['tags']
20 
21         cctxt = self.client.prepare(version=version)
22         cctxt.cast(context, 'schedule_and_build_instances', **kw)

This code does two things: it negotiates the RPC version and adjusts the arguments to match (the tags argument is dropped when only version 1.16 is supported), and it then makes an asynchronous RPC cast to the schedule_and_build_instances() method in nova/conductor/manager.py.
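
The version-negotiation pattern can be shown with a stand-in client (FakeRPCClient and build_call are illustrative names, not part of Nova):

```python
class FakeRPCClient:
    """Illustrative stand-in for an oslo.messaging RPC client."""
    def __init__(self, max_version):
        self.max_version = max_version

    def can_send_version(self, version):
        # The real client checks against the negotiated version cap;
        # plain string comparison suffices for these 'x.y' versions.
        return version <= self.max_version

def build_call(client, tags=None):
    """Sketch of the fallback in schedule_and_build_instances():
    drop arguments that an older RPC version does not understand."""
    version = '1.17'
    kw = {'tags': tags}
    if not client.can_send_version(version):
        version = '1.16'
        del kw['tags']
    return version, kw
```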

  1     def schedule_and_build_instances(self, context, build_requests,
  2                                      request_specs, image,
  3                                      admin_password, injected_files,
  4                                      requested_networks, block_device_mapping,
  5                                      tags=None):
  6         # Add all the UUIDs for the instances
  7         instance_uuids = [spec.instance_uuid for spec in request_specs]
  8         try:
  9             host_lists = self._schedule_instances(context, request_specs[0],
 10                     instance_uuids, return_alternates=True)
 11         except Exception as exc:
 12             LOG.exception('Failed to schedule instances')
 13             self._bury_in_cell0(context, request_specs[0], exc,
 14                                 build_requests=build_requests,
 15                                 block_device_mapping=block_device_mapping,
 16                                 tags=tags)
 17             return
 18 
 19         host_mapping_cache = {}
 20         cell_mapping_cache = {}
 21         instances = []
 22         host_az = {}  # host=az cache to optimize multi-create
 23 
 24         for (build_request, request_spec, host_list) in six.moves.zip(
 25                 build_requests, request_specs, host_lists):
 26             instance = build_request.get_new_instance(context)
 27             # host_list is a list of one or more Selection objects, the first
 28             # of which has been selected and its resources claimed.
 29             host = host_list[0]
 30             # Convert host from the scheduler into a cell record
 31             if host.service_host not in host_mapping_cache:
 32                 try:
 33                     host_mapping = objects.HostMapping.get_by_host(
 34                         context, host.service_host)
 35                     host_mapping_cache[host.service_host] = host_mapping
 36                 except exception.HostMappingNotFound as exc:
 37                     LOG.error('No host-to-cell mapping found for selected '
 38                               'host %(host)s. Setup is incomplete.',
 39                               {'host': host.service_host})
 40                     self._bury_in_cell0(
 41                         context, request_spec, exc,
 42                         build_requests=[build_request], instances=[instance],
 43                         block_device_mapping=block_device_mapping,
 44                         tags=tags)
 45                     # This is a placeholder in case the quota recheck fails.
 46                     instances.append(None)
 47                     continue
 48             else:
 49                 host_mapping = host_mapping_cache[host.service_host]
 50 
 51             cell = host_mapping.cell_mapping
 52 
 53             # Before we create the instance, let's make one final check that
 54             # the build request is still around and wasn't deleted by the user
 55             # already.
 56             try:
 57                 objects.BuildRequest.get_by_instance_uuid(
 58                     context, instance.uuid)
 59             except exception.BuildRequestNotFound:
 60                 # the build request is gone so we're done for this instance
 61                 LOG.debug('While scheduling instance, the build request '
 62                           'was already deleted.', instance=instance)
 63                 # This is a placeholder in case the quota recheck fails.
 64                 instances.append(None)
 65                 # If the build request was deleted and the instance is not
 66                 # going to be created, there is no point in leaving an orphan
 67                 # instance mapping so delete it.
 68                 try:
 69                     im = objects.InstanceMapping.get_by_instance_uuid(
 70                         context, instance.uuid)
 71                     im.destroy()
 72                 except exception.InstanceMappingNotFound:
 73                     pass
 74                 self.report_client.delete_allocation_for_instance(
 75                     context, instance.uuid)
 76                 continue
 77             else:
 78                 if host.service_host not in host_az:
 79                     host_az[host.service_host] = (
 80                         availability_zones.get_host_availability_zone(
 81                             context, host.service_host))
 82                 instance.availability_zone = host_az[host.service_host]
 83                 with obj_target_cell(instance, cell):
 84                     instance.create()
 85                     instances.append(instance)
 86                     cell_mapping_cache[instance.uuid] = cell
 87 
 88         # NOTE(melwitt): We recheck the quota after creating the
 89         # objects to prevent users from allocating more resources
 90         # than their allowed quota in the event of a race. This is
 91         # configurable because it can be expensive if strict quota
 92         # limits are not required in a deployment.
 93         if CONF.quota.recheck_quota:
 94             try:
 95                 compute_utils.check_num_instances_quota(
 96                     context, instance.flavor, 0, 0,
 97                     orig_num_req=len(build_requests))
 98             except exception.TooManyInstances as exc:
 99                 with excutils.save_and_reraise_exception():
100                     self._cleanup_build_artifacts(context, exc, instances,
101                                                   build_requests,
102                                                   request_specs,
103                                                   block_device_mapping, tags,
104                                                   cell_mapping_cache)
105 
106         zipped = six.moves.zip(build_requests, request_specs, host_lists,
107                               instances)
108         for (build_request, request_spec, host_list, instance) in zipped:
109             if instance is None:
110                 # Skip placeholders that were buried in cell0 or had their
111                 # build requests deleted by the user before instance create.
112                 continue
113             cell = cell_mapping_cache[instance.uuid]
114             # host_list is a list of one or more Selection objects, the first
115             # of which has been selected and its resources claimed.
116             host = host_list.pop(0)
117             alts = [(alt.service_host, alt.nodename) for alt in host_list]
118             LOG.debug("Selected host: %s; Selected node: %s; Alternates: %s",
119                     host.service_host, host.nodename, alts, instance=instance)
120             filter_props = request_spec.to_legacy_filter_properties_dict()
121             scheduler_utils.populate_retry(filter_props, instance.uuid)
122             scheduler_utils.populate_filter_properties(filter_props,
123                                                        host)
124 
125             # Now that we have a selected host (which has claimed resource
126             # allocations in the scheduler) for this instance, we may need to
127             # map allocations to resource providers in the request spec.
128             try:
129                 scheduler_utils.fill_provider_mapping(
130                     context, self.report_client, request_spec, host)
131             except Exception as exc:
132                 # If anything failed here we need to cleanup and bail out.
133                 with excutils.save_and_reraise_exception():
134                     self._cleanup_build_artifacts(
135                         context, exc, instances, build_requests, request_specs,
136                         block_device_mapping, tags, cell_mapping_cache)
137 
138             # TODO(melwitt): Maybe we should set_target_cell on the contexts
139             # once we map to a cell, and remove these separate with statements.
140             with obj_target_cell(instance, cell) as cctxt:
141                 # send a state update notification for the initial create to
142                 # show it going from non-existent to BUILDING
143                 # This can lazy-load attributes on instance.
144                 notifications.send_update_with_states(cctxt, instance, None,
145                         vm_states.BUILDING, None, None, service="conductor")
146                 objects.InstanceAction.action_start(
147                     cctxt, instance.uuid, instance_actions.CREATE,
148                     want_result=False)
149                 instance_bdms = self._create_block_device_mapping(
150                     cell, instance.flavor, instance.uuid, block_device_mapping)
151                 instance_tags = self._create_tags(cctxt, instance.uuid, tags)
152 
153             # TODO(Kevin Zheng): clean this up once instance.create() handles
154             # tags; we do this so the instance.create notification in
155             # build_and_run_instance in nova-compute doesn't lazy-load tags
156             instance.tags = instance_tags if instance_tags \
157                 else objects.TagList()
158 
159             # Update mapping for instance. Normally this check is guarded by
160             # a try/except but if we're here we know that a newer nova-api
161             # handled the build process and would have created the mapping
162             inst_mapping = objects.InstanceMapping.get_by_instance_uuid(
163                 context, instance.uuid)
164             inst_mapping.cell_mapping = cell
165             inst_mapping.save()
166 
167             if not self._delete_build_request(
168                     context, build_request, instance, cell, instance_bdms,
169                     instance_tags):
170                 # The build request was deleted before/during scheduling so
171                 # the instance is gone and we don't have anything to build for
172                 # this one.
173                 continue
174 
175             # NOTE(danms): Compute RPC expects security group names or ids
176             # not objects, so convert this to a list of names until we can
177             # pass the objects.
178             legacy_secgroups = [s.identifier
179                                 for s in request_spec.security_groups]
180             with obj_target_cell(instance, cell) as cctxt:
181                 self.compute_rpcapi.build_and_run_instance(
182                     cctxt, instance=instance, image=image,
183                     request_spec=request_spec,
184                     filter_properties=filter_props,
185                     admin_password=admin_password,
186                     injected_files=injected_files,
187                     requested_networks=requested_networks,
188                     security_groups=legacy_secgroups,
189                     block_device_mapping=instance_bdms,
190                     host=host.service_host, node=host.nodename,
191                     limits=host.limits, host_list=host_list)
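The quota recheck above (the NOTE(melwitt) block) follows a "create first, then re-count" pattern: objects are created optimistically, the quota is re-checked, and if a concurrent request pushed usage over the limit, this request's artifacts are cleaned up and the exception is re-raised. A minimal standalone sketch of that pattern, where `check_quota` and `create_with_recheck` are hypothetical stand-ins for `compute_utils.check_num_instances_quota()` and `_cleanup_build_artifacts()`:

```python
class TooManyInstances(Exception):
    pass


def check_quota(current, requested, limit):
    """Raise if current usage plus the new request exceeds the limit."""
    if current + requested > limit:
        raise TooManyInstances()


def create_with_recheck(db, names, limit):
    created = []
    for name in names:
        db.append(name)           # optimistic create (instance.create())
        created.append(name)
    try:
        # Recheck with 0 requested: does the new total still fit?
        check_quota(len(db), 0, limit)
    except TooManyInstances:
        for name in created:      # roll back this request's artifacts
            db.remove(name)
        raise
    return created
```

The recheck is configurable (`CONF.quota.recheck_quota`) precisely because this second count is an extra database round trip that deployments without strict quota enforcement can skip.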

schedule_and_build_instances() is fairly complex. At line 9 it calls _schedule_instances() to pick the hosts eligible to build the requested instances:

 1     def _schedule_instances(self, context, request_spec,
 2                             instance_uuids=None, return_alternates=False):
 3         scheduler_utils.setup_instance_group(context, request_spec)
 4         with timeutils.StopWatch() as timer:
 5             host_lists = self.query_client.select_destinations(
 6                 context, request_spec, instance_uuids, return_objects=True,
 7                 return_alternates=return_alternates)
 8         LOG.debug('Took %0.2f seconds to select destinations for %s '
 9                   'instance(s).', timer.elapsed(), len(instance_uuids))
10         return host_lists

Line 5 calls select_destinations(), which in turn makes the RPC call via select_destinations() in scheduler_rpcapi:

 1     def select_destinations(self, context, spec_obj, instance_uuids,
 2             return_objects=False, return_alternates=False):
 3         """Returns destinations(s) best suited for this request_spec and
 4         filter_properties.
 5 
 6         When return_objects is False, the result will be the "old-style" list
 7         of dicts with 'host', 'nodename' and 'limits' as keys. The value of
 8         return_alternates is ignored.
 9 
10         When return_objects is True, the result will be a list of lists of
11         Selection objects, with one list per instance. Each instance's list
12         will contain a Selection representing the selected (and claimed) host,
13         and, if return_alternates is True, zero or more Selection objects that
14         represent alternate hosts. The number of alternates returned depends on
15         the configuration setting `CONF.scheduler.max_attempts`.
16         """
17         return self.scheduler_rpcapi.select_destinations(context, spec_obj,
18                 instance_uuids, return_objects, return_alternates)
 1     def select_destinations(self, ctxt, spec_obj, instance_uuids,
 2             return_objects=False, return_alternates=False):
 3         # Modify the parameters if an older version is requested
 4         version = '4.5'
 5         msg_args = {'instance_uuids': instance_uuids,
 6                     'spec_obj': spec_obj,
 7                     'return_objects': return_objects,
 8                     'return_alternates': return_alternates}
 9         if not self.client.can_send_version(version):
10             if msg_args['return_objects'] or msg_args['return_alternates']:
11                 # The client is requesting an RPC version we can't support.
12                 raise exc.SelectionObjectsWithOldRPCVersionNotSupported(
13                         version=self.client.version_cap)
14             del msg_args['return_objects']
15             del msg_args['return_alternates']
16             version = '4.4'
17         if not self.client.can_send_version(version):
18             del msg_args['instance_uuids']
19             version = '4.3'
20         if not self.client.can_send_version(version):
21             del msg_args['spec_obj']
22             msg_args['request_spec'] = spec_obj.to_legacy_request_spec_dict()
23             msg_args['filter_properties'
24                      ] = spec_obj.to_legacy_filter_properties_dict()
25             version = '4.0'
26         cctxt = self.client.prepare(
27             version=version, call_monitor_timeout=CONF.rpc_response_timeout,
28             timeout=CONF.long_rpc_timeout)
29         return cctxt.call(ctxt, 'select_destinations', **msg_args)
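The cascade of can_send_version() checks above implements RPC version negotiation: when the deployment's version cap is older than the newest message format, arguments are dropped or converted until the message fits a version the server side can accept. A minimal sketch of just the negotiation logic, where `negotiate()` and the inline `can_send()` are illustrative stand-ins rather than oslo.messaging APIs (the legacy 4.0 conversion branch is omitted):

```python
def negotiate(version_cap, instance_uuids, spec_obj,
              return_objects=False, return_alternates=False):
    """Return (version, msg_args) that fit under ``version_cap``."""

    def can_send(version):
        # can_send_version()-style check: compare 'major.minor' numerically.
        want = tuple(int(p) for p in version.split('.'))
        cap = tuple(int(p) for p in version_cap.split('.'))
        return want <= cap

    version = '4.5'
    msg_args = {'instance_uuids': instance_uuids,
                'spec_obj': spec_obj,
                'return_objects': return_objects,
                'return_alternates': return_alternates}
    if not can_send(version):
        if msg_args['return_objects'] or msg_args['return_alternates']:
            # The caller needs Selection objects, but an old server
            # cannot return them, so fail rather than degrade silently.
            raise RuntimeError('Selection objects need RPC >= 4.5')
        del msg_args['return_objects']
        del msg_args['return_alternates']
        version = '4.4'
    if not can_send(version):
        del msg_args['instance_uuids']
        version = '4.3'
    return version, msg_args
```

The real method degrades one step further to 4.0 by converting spec_obj into the legacy request_spec/filter_properties dicts.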

Note: for reasons of space, the methods further down this call chain are not traced here; they can be read in the source the same way as above.

Back to the main flow.

Most of schedule_and_build_instances() works against the database, querying records and checking them, and at the end it iterates over host_list to create the instances. Because this is a for loop, build_and_run_instance() is called once for each instance being created:

 1     def build_and_run_instance(self, ctxt, instance, host, image, request_spec,
 2             filter_properties, admin_password=None, injected_files=None,
 3             requested_networks=None, security_groups=None,
 4             block_device_mapping=None, node=None, limits=None,
 5             host_list=None):
 6         # NOTE(edleafe): compute nodes can only use the dict form of limits.
 7         if isinstance(limits, objects.SchedulerLimits):
 8             limits = limits.to_dict()
 9         kwargs = {"instance": instance,
10                   "image": image,
11                   "request_spec": request_spec,
12                   "filter_properties": filter_properties,
13                   "admin_password": admin_password,
14                   "injected_files": injected_files,
15                   "requested_networks": requested_networks,
16                   "security_groups": security_groups,
17                   "block_device_mapping": block_device_mapping,
18                   "node": node,
19                   "limits": limits,
20                   "host_list": host_list,
21                  }
22         client = self.router.client(ctxt)
23         version = '5.0'
24         cctxt = client.prepare(server=host, version=version)
25         cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)

The main job of this method is to make an RPC cast to build_and_run_instance() in nova/compute/manager.py:
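Note that this method ends with cctxt.cast() (fire and forget, no return value) whereas select_destinations() earlier used cctxt.call() (block until the server replies). A toy model of those two oslo.messaging semantics, where ToyRPCClient is illustrative only and not the real library:

```python
import collections


class ToyRPCClient:
    """Toy model of RPC semantics: call() is synchronous and returns the
    handler's result; cast() only enqueues the request and returns None
    immediately -- the server processes it later."""

    def __init__(self, handlers):
        self.handlers = handlers
        self.pending = collections.deque()

    def call(self, method, **kwargs):
        # Synchronous round trip: run the handler and hand back its result.
        return self.handlers[method](**kwargs)

    def cast(self, method, **kwargs):
        # Fire-and-forget: queue the message; no result travels back.
        self.pending.append((method, kwargs))
        return None

    def drain(self):
        # Stand-in for the server-side message loop.
        while self.pending:
            method, kwargs = self.pending.popleft()
            self.handlers[method](**kwargs)
```

Casting here is what lets the conductor move on to the next instance in the loop without waiting for each compute host to finish building.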

 1     @wrap_exception()
 2     @reverts_task_state
 3     @wrap_instance_fault
 4     def build_and_run_instance(self, context, instance, image, request_spec,
 5                      filter_properties, admin_password=None,
 6                      injected_files=None, requested_networks=None,
 7                      security_groups=None, block_device_mapping=None,
 8                      node=None, limits=None, host_list=None):
 9 
10         @utils.synchronized(instance.uuid)
11         def _locked_do_build_and_run_instance(*args, **kwargs):
12             # NOTE(danms): We grab the semaphore with the instance uuid
13             # locked because we could wait in line to build this instance
14             # for a while and we want to make sure that nothing else tries
15             # to do anything with this instance while we wait.
16             with self._build_semaphore:
17                 try:
18                     result = self._do_build_and_run_instance(*args, **kwargs)
19                 except Exception:
20                     # NOTE(mriedem): This should really only happen if
21                     # _decode_files in _do_build_and_run_instance fails, and
22                     # that's before a guest is spawned so it's OK to remove
23                     # allocations for the instance for this node from Placement
24                     # below as there is no guest consuming resources anyway.
25                     # The _decode_files case could be handled more specifically
26                     # but that's left for another day.
27                     result = build_results.FAILED
28                     raise
29                 finally:
30                     if result == build_results.FAILED:
31                         # Remove the allocation records from Placement for the
32                         # instance if the build failed. The instance.host is
33                         # likely set to None in _do_build_and_run_instance
34                         # which means if the user deletes the instance, it
35                         # will be deleted in the API, not the compute service.
36                         # Setting the instance.host to None in
37                         # _do_build_and_run_instance means that the
38                         # ResourceTracker will no longer consider this instance
39                         # to be claiming resources against it, so we want to
40                         # reflect that same thing in Placement.  No need to
41                         # call this for a reschedule, as the allocations will
42                         # have already been removed in
43                         # self._do_build_and_run_instance().
44                         self.reportclient.delete_allocation_for_instance(
45                             context, instance.uuid)
46 
47                     if result in (build_results.FAILED,
48                                   build_results.RESCHEDULED):
49                         self._build_failed(node)
50                     else:
51                         self._build_succeeded(node)
52 
53         # NOTE(danms): We spawn here to return the RPC worker thread back to
54         # the pool. Since what follows could take a really long time, we don't
55         # want to tie up RPC workers.
56         utils.spawn_n(_locked_do_build_and_run_instance,
57                       context, instance, image, request_spec,
58                       filter_properties, admin_password, injected_files,
59                       requested_networks, security_groups,
60                       block_device_mapping, node, limits, host_list)
 1 def spawn_n(func, *args, **kwargs):
 2     """Passthrough method for eventlet.spawn_n.
 3 
 4     This utility exists so that it can be stubbed for testing without
 5     interfering with the service spawns.
 6 
 7     It will also grab the context from the threadlocal store and add it to
 8     the store on the new thread.  This allows for continuity in logging the
 9     context when using this method to spawn a new thread.
10     """
11     _context = common_context.get_current()
12     profiler_info = _serialize_profile_info()
13 
14     @functools.wraps(func)
15     def context_wrapper(*args, **kwargs):
16         # NOTE: If update_store is not called after spawn_n it won't be
17         # available for the logger to pull from threadlocal storage.
18         if _context is not None:
19             _context.update_store()
20         if profiler_info and profiler:
21             profiler.init(**profiler_info)
22         func(*args, **kwargs)
23 
24     eventlet.spawn_n(context_wrapper, *args, **kwargs)

This method hands the build off to a green thread (an eventlet coroutine); to keep the instance's data consistent, a lock keyed on the instance UUID is acquired before the build work runs.
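The @utils.synchronized(instance.uuid) decorator serializes concurrent operations on the same instance while letting different instances build in parallel. A minimal sketch of that per-name locking, where synchronized() is a simplified stand-in for Nova's lockutils-based decorator:

```python
import collections
import functools
import threading
import time

# One lock per lock name; the same name always yields the same lock.
_locks = collections.defaultdict(threading.Lock)


def synchronized(name):
    """Serialize all calls decorated with the same lock name."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            with _locks[name]:
                return func(*args, **kwargs)
        return wrapper
    return decorator


events = []


@synchronized('instance-uuid-1')
def do_build(worker):
    events.append((worker, 'start'))
    time.sleep(0.01)  # pretend the build takes a while
    events.append((worker, 'end'))


threads = [threading.Thread(target=do_build, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock, each worker's ('start', 'end') pair is contiguous.
```

In Nova the worker also takes a host-wide build semaphore inside the per-instance lock, bounding how many builds run concurrently on one compute node.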

 

Reposted from: https://www.cnblogs.com/jindp/p/11535653.html
