Tracing the source code of creating a VM in Icehouse

Reposted from: http://bingotree.cn/?p=470

In this article, Xiao Qin (the author) traces the source-code flow of creating a VM in OpenStack.

1. Creating a VM through the API
Let's first look at how a VM is created through the API.
First, obtain a token:

[root@CONTROLLER01 ~]# curl -X POST -d '{"auth":{"passwordCredentials":{"username":"admin","password":"adminPass"},"tenantName":"admin"}}' -H "Content-type: application/json" http://192.168.19.95:5000/v2.0/tokens | jq .

List the current tenant's VMs:

[root@CONTROLLER01 ~]# curl -X GET -H "X-Auth-Token:MIILcQYJKoZIhvcNAQcCoIILYjCCC14CAQExCTAHBgUrDgMCGjCCCccGCSqGSIb3DQEHAaCCCbgEggm0eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNS0xNFQxMzozMTozMS4yNDQ5NzYiLCAiZXhwaXJlcyI6ICIyMDE0LTA1LTE0VDE0OjMxOjMxWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImEzZjU0MjIwMjNkNDQ4NDJiNWM2OTM3ZmQ2NDQ0NmQ5IiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3NC92Mi9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3NC92Mi9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSIsICJpZCI6ICI4YjgwYjI5NWIzMDE0ODBkODI4OGQ2Njg1ZTAyN2NkYSIsICJwdWJsaWNVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo4Nzc0L3YyL2EzZjU0MjIwMjNkNDQ4NDJiNWM2OTM3ZmQ2NDQ0NmQ5In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5Njk2IiwgImlkIjogIjA1ZWRjODAxYTAwODQ2ZTc5ODZkMmM5MDAxNzcyNTkzIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjk2OTYifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjIvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjIvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAiaWQiOiAiOWYyZjVkNzk2NTljNGEyZWJlNTY3MjIwNTc4ODQ5OGUiLCAicHVibGljVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3Ni92Mi9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcnYyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5MjkyIiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5MjkyIiwgImlkIjogIjZhZjRmYjk1MTUzYzRjOWQ5MGMyMThkZTZhOWM0NDQ3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjkyOTIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaW1hZ2UiLCAibmFtZSI6ICJnbGFuY2UifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjEvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjEvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAiaWQiOiAiMGQ1ZDE2OGQzNGM3NDU1OGFkMWE3ZDdlMzVlYTMxZDUiLCAicHVibGljVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3Ni92MS9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWUiLCAibmFtZSI6ICJjaW5kZXIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjUwMDAvdjIuMCIsICJpZCI6ICIwYWFhNTM1YzU1ZDE0MGM5OTgxOGIzM2YzODdiMmQwZSIsICJwdWJsaWNVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo1MDAwL3YyLjAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaWRlbnRpdHkiLCAibmFtZSI6ICJrZXlzdG9uZSJ9XSwgInVzZXIiOiB7InVzZXJuYW1lIjogImFkbWluIiwgInJvbGVzX2xpbmtzIjogW10sICJpZCI6ICI2MGRjODUwMDE4ZmI0YTEzOTMzNTg0ZWMxMmZlMTc0YSIsICJyb2xlcyI6IFt7Im5hbWUiOiAiX21lbWJlcl8ifSwgeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJhZG1pbiJ9LCAibWV0YWRhdGEiOiB7ImlzX2FkbWluIjogMCwgInJvbGVzIjogWyI5ZmUyZmY5ZWU0Mzg0YjE4OTRhOTA4NzhkM2U5MmJhYiIsICJmMjdkMTVkNTY3MWE0ZGZmOWFkMjA2ZWQ3MWFhZGQzMCJdfX19MYIBgTCCAX0CAQEwXDBXMQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVW5zZXQxDjAMBgNVBAcMBVVuc2V0MQ4wDAYDVQQKDAVVbnNldDEYMBYGA1U
EAwwPd3d3LmV4YW1wbGUuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIIBAGu2sKqWLOCtAIshHyCyBp6W3AS6MLTJajyF5RgeMblOnjVzJ-b3ql8sJ9RSQxuTQXrlQU9IYBaG0BMn5pN26iyMEwHpt81xx18282YDc51OyNPsOE7EgKSZTg7hGuvZBC5iwgXdUsiuU0+jR6-rWnpcZ19nPG7dsGJ-w6ZcMZaMeTO-zQj+HbInaneAkghXF55EhyXZSh+gNUTiXuUOtGGGxJVM1jNPkPg6NKsqFtCWLaUWXXERqLKd8-q6BfkDC-YNDSdaSyElE8DX5Y4TDouat+cLGVPUdDU7X2fTPR5g+mz3KeYfd7G5gwfUWQ+l3IeOPpXn78X5nRp8qi-W7G4=" http://192.168.19.95:8774/v2/a3f5422023d44842b5c6937fd64446d9/servers | jq .

Create a VM:

[root@CONTROLLER01 ~]# curl -X POST -H "Content-type: application/json" -H "X-Auth-Token:MIILcQYJKoZIhvcNAQcCoIILYjCCC14CAQExCTAHBgUrDgMCGjCCCccGCSqGSIb3DQEHAaCCCbgEggm0eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxNC0wNS0xNFQxMzozMTozMS4yNDQ5NzYiLCAiZXhwaXJlcyI6ICIyMDE0LTA1LTE0VDE0OjMxOjMxWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImEzZjU0MjIwMjNkNDQ4NDJiNWM2OTM3ZmQ2NDQ0NmQ5IiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3NC92Mi9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3NC92Mi9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSIsICJpZCI6ICI4YjgwYjI5NWIzMDE0ODBkODI4OGQ2Njg1ZTAyN2NkYSIsICJwdWJsaWNVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo4Nzc0L3YyL2EzZjU0MjIwMjNkNDQ4NDJiNWM2OTM3ZmQ2NDQ0NmQ5In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5Njk2IiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5Njk2IiwgImlkIjogIjA1ZWRjODAxYTAwODQ2ZTc5ODZkMmM5MDAxNzcyNTkzIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjk2OTYifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjIvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjIvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAiaWQiOiAiOWYyZjVkNzk2NTljNGEyZWJlNTY3MjIwNTc4ODQ5OGUiLCAicHVibGljVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3Ni92Mi9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcnYyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5MjkyIiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo5MjkyIiwgImlkIjogIjZhZjRmYjk1MTUzYzRjOWQ5MGMyMThkZTZhOWM0NDQ3IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjkyOTIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaW1hZ2UiLCAibmFtZSI6ICJnbGFuY2UifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjEvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjg3NzYvdjEvYTNmNTQyMjAyM2Q0NDg0MmI1YzY5MzdmZDY0NDQ2ZDkiLCAiaWQiOiAiMGQ1ZDE2OGQzNGM3NDU1OGFkMWE3ZDdlMzVlYTMxZDUiLCAicHVibGljVVJMIjogImh0dHA6Ly9DT05UUk9MTEVSMDE6ODc3Ni92MS9hM2Y1NDIyMDIzZDQ0ODQyYjVjNjkzN2ZkNjQ0NDZkOSJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWUiLCAibmFtZSI6ICJjaW5kZXIifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjM1MzU3L3YyLjAiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vQ09OVFJPTExFUjAxOjUwMDAvdjIuMCIsICJpZCI6ICIwYWFhNTM1YzU1ZDE0MGM5OTgxOGIzM2YzODdiMmQwZSIsICJwdWJsaWNVUkwiOiAiaHR0cDovL0NPTlRST0xMRVIwMTo1MDAwL3YyLjAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaWRlbnRpdHkiLCAibmFtZSI6ICJrZXlzdG9uZSJ9XSwgInVzZXIiOiB7InVzZXJuYW1lIjogImFkbWluIiwgInJvbGVzX2xpbmtzIjogW10sICJpZCI6ICI2MGRjODUwMDE4ZmI0YTEzOTMzNTg0ZWMxMmZlMTc0YSIsICJyb2xlcyI6IFt7Im5hbWUiOiAiX21lbWJlcl8ifSwgeyJuYW1lIjogImFkbWluIn1dLCAibmFtZSI6ICJhZG1pbiJ9LCAibWV0YWRhdGEiOiB7ImlzX2FkbWluIjogMCwgInJvbGVzIjogWyI5ZmUyZmY5ZWU0Mzg0YjE4OTRhOTA4NzhkM2U5MmJhYiIsICJmMjdkMTVkNTY3MWE0ZGZmOWFkMjA2ZWQ3MWFhZGQzMCJdfX19MYIBgTCCAX0CAQEwXDBXMQswCQYDVQQGEwJVUzEOMAwGA1UECAwFVW5zZXQxDjAMBgNVBAcMBV
Vuc2V0MQ4wDAYDVQQKDAVVbnNldDEYMBYGA1UEAwwPd3d3LmV4YW1wbGUuY29tAgEBMAcGBSsOAwIaMA0GCSqGSIb3DQEBAQUABIIBAGu2sKqWLOCtAIshHyCyBp6W3AS6MLTJajyF5RgeMblOnjVzJ-b3ql8sJ9RSQxuTQXrlQU9IYBaG0BMn5pN26iyMEwHpt81xx18282YDc51OyNPsOE7EgKSZTg7hGuvZBC5iwgXdUsiuU0+jR6-rWnpcZ19nPG7dsGJ-w6ZcMZaMeTO-zQj+HbInaneAkghXF55EhyXZSh+gNUTiXuUOtGGGxJVM1jNPkPg6NKsqFtCWLaUWXXERqLKd8-q6BfkDC-YNDSdaSyElE8DX5Y4TDouat+cLGVPUdDU7X2fTPR5g+mz3KeYfd7G5gwfUWQ+l3IeOPpXn78X5nRp8qi-W7G4=" -d '{"server":{"name":"KVM02","imageRef":"d2e2295e-d695-47ab-8c29-dacd990713da","flavorRef":"a39484a3-7403-4c9d-a801-b627e26b5067","max_count":1,"min_count":1,"networks":[{"uuid":"56c261c9-05ab-4bcc-b7c9-4e871d9edd9b"}]}}' http://192.168.19.95:8774/v2/a3f5422023d44842b5c6937fd64446d9/servers?availability_zone=nova | jq .

2. Initialization of the relevant API
From my earlier articles (on the initialization of nova-compute and of nova-api) it is easy to find that the entry point corresponding to this API is:

        if init_only is None or 'consoles' in init_only or \
                'servers' in init_only or 'ips' in init_only:
            self.resources['servers'] = servers.create_resource(ext_mgr)
            mapper.resource("server", "servers",
                            controller=self.resources['servers'],
                            collection={'detail': 'GET'},
                            member={'action': 'POST'})

In other words, once this POST request is issued, the controller's methods here get invoked. servers.create_resource(ext_mgr) eventually ends up in the load_standard_extensions method in nova/api/openstack/extensions.py, which consults osapi_compute_ext_list in nova.conf. If osapi_compute_ext_list is empty, every .py file under nova/api/openstack/compute/contrib is imported as an extension; otherwise only the extensions named in osapi_compute_ext_list are imported (a hedged config illustration follows the function below):

def load_standard_extensions(ext_mgr, logger, path, package, ext_list=None):
    """Registers all standard API extensions."""

    # Walk through all the modules in our directory...
    our_dir = path[0]
    for dirpath, dirnames, filenames in os.walk(our_dir):
        # Compute the relative package name from the dirpath
        relpath = os.path.relpath(dirpath, our_dir)
        if relpath == '.':
            relpkg = ''
        else:
            relpkg = '.%s' % '.'.join(relpath.split(os.sep))

        # Now, consider each file in turn, only considering .py files
        for fname in filenames:
            root, ext = os.path.splitext(fname)

            # Skip __init__ and anything that's not .py
            if ext != '.py' or root == '__init__':
                continue

            # Try loading it
            classname = "%s%s" % (root[0].upper(), root[1:])
            classpath = ("%s%s.%s.%s" %
                         (package, relpkg, root, classname))

            if ext_list is not None and classname not in ext_list:
                logger.debug("Skipping extension: %s" % classpath)
                continue

            try:
                ext_mgr.load_extension(classpath)
            except Exception as exc:
                logger.warn(_('Failed to load extension %(classpath)s: '
                              '%(exc)s'),
                            {'classpath': classpath, 'exc': exc})

        # Now, let's consider any subdirectories we may have...
        subdirs = []
        for dname in dirnames:
            # Skip it if it does not have __init__.py
            if not os.path.exists(os.path.join(dirpath, dname, '__init__.py')):
                continue

            # If it has extension(), delegate...
            ext_name = "%s%s.%s.extension" % (package, relpkg, dname)
            try:
                ext = importutils.import_class(ext_name)
            except ImportError:
                # extension() doesn't exist on it, so we'll explore
                # the directory for ourselves
                subdirs.append(dname)
            else:
                try:
                    ext(ext_mgr)
                except Exception as exc:
                    logger.warn(_('Failed to load extension %(ext_name)s:'
                                  '%(exc)s'),
                                {'ext_name': ext_name, 'exc': exc})

        # Update the list of directories we'll explore...
        dirnames[:] = subdirs
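For reference, restricting the list in nova.conf would look roughly like this (the class names are hypothetical examples; any class name matching a module under contrib/ works):

osapi_compute_ext_list = Keypairs,Quotas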

ext_mgr.load_extension(classpath) here eventually calls register to register an extension:

    def register(self, ext):
        # Do nothing if the extension doesn't check out
        if not self._check_extension(ext):
            return

        alias = ext.alias
        LOG.audit(_('Loaded extension: %s'), alias)

        if alias in self.extensions:
            raise exception.NovaException("Found duplicate extension: %s"
                                          % alias)
        self.extensions[alias] = ext
        self.sorted_ext_list = None

As you can see, the lookup is ultimately done by alias; every extension has one, for example:

class Quotas(extensions.ExtensionDescriptor):
    """Quotas management support."""

    name = "Quotas"
    alias = "os-quota-sets"
    namespace = "http://docs.openstack.org/compute/ext/quotas-sets/api/v1.1"
    updated = "2011-08-08T00:00:00+00:00"

    def get_resources(self):
        resources = []

        res = extensions.ResourceExtension('os-quota-sets',
                                            QuotaSetsController(self.ext_mgr),
                                            member_actions={'defaults': 'GET'})
        resources.append(res)

        return resources

Back to our create server. Based on my earlier nova articles and the analysis above, it is easy to locate the corresponding controller: nova/api/openstack/compute/servers.py. (If the controller concept is unclear, see my earlier article on it.) The create method in that file is our VM-creation method. It is quite long, so let's go through it step by step.

3. The create method

    def create(self, req, body):
        """Creates a new server for a given user."""
        if not self.is_valid_body(body, 'server'):
            raise exc.HTTPUnprocessableEntity()

        context = req.environ['nova.context']
        server_dict = body['server']
        password = self._get_server_admin_password(server_dict)

        if 'name' not in server_dict:
            msg = _("Server name is not defined")
            raise exc.HTTPBadRequest(explanation=msg)

        name = server_dict['name']
        self._validate_server_name(name)
        name = name.strip()

        image_uuid = self._image_from_req_data(body)

As you can see, the code first looks in the HTTP request body for adminPass; a glance at _get_server_admin_password shows that the password's strength is not checked at all. After that it verifies that a server name was given (KVM02 in my example above). The last line extracts the image-related information from the request; the concrete implementation is:

    def _image_from_req_data(self, data):
        """Get image data from the request or raise appropriate
        exceptions

        If no image is supplied - checks to see if there is
        block devices set and proper extesions loaded.
        """
        image_ref = data['server'].get('imageRef')
        bdm = data['server'].get('block_device_mapping')
        bdm_v2 = data['server'].get('block_device_mapping_v2')

        if (not image_ref and (
                (bdm and self.ext_mgr.is_loaded('os-volumes')) or
                (bdm_v2 and
                 self.ext_mgr.is_loaded('os-block-device-mapping-v2-boot')))):
            return ''
        else:
            image_href = self._image_ref_from_req_data(data)
            image_uuid = self._image_uuid_from_href(image_href)
            return image_uuid

The next piece of code is:

        personality = server_dict.get('personality')
        config_drive = None
        if self.ext_mgr.is_loaded('os-config-drive'):
            config_drive = server_dict.get('config_drive')

Here the code reads personality from the body and, if the os-config-drive extension is loaded, also reads the config_drive value given in the body.
Reading on:

        injected_files = []
        if personality:
            injected_files = self._get_injected_files(personality)

    def _get_injected_files(self, personality):
        """Create a list of injected files from the personality attribute.

        At this time, injected_files must be formatted as a list of
        (file_path, file_content) pairs for compatibility with the
        underlying compute service.
        """
        injected_files = []

        for item in personality:
            try:
                path = item['path']
                contents = item['contents']
            except KeyError as key:
                expl = _('Bad personality format: missing %s') % key
                raise exc.HTTPBadRequest(explanation=expl)
            except TypeError:
                expl = _('Bad personality format')
                raise exc.HTTPBadRequest(explanation=expl)
            if self._decode_base64(contents) is None:
                expl = _('Personality content for %s cannot be decoded') % path
                raise exc.HTTPBadRequest(explanation=expl)
            injected_files.append((path, contents))
        return injected_files

As you can see, if the body contains a personality item, we can specify a list of files to be injected; here they are simply held in memory as (path, contents) pairs. A hedged illustration follows.
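For concreteness, here is a sketch of a personality entry written as a Python literal (the path and contents field names come from the code above; the values are invented, and contents must be base64 so that _decode_base64 succeeds):

    personality = [
        # "aGVsbG8gd29ybGQK" is base64 for "hello world\n"; a non-base64
        # string would trigger the HTTPBadRequest above
        {"path": "/etc/motd", "contents": "aGVsbG8gd29ybGQK"},
    ]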
Continuing with the code:

        sg_names = []
        if self.ext_mgr.is_loaded('os-security-groups'):
            security_groups = server_dict.get('security_groups')
            if security_groups is not None:
                sg_names = [sg['name'] for sg in security_groups
                            if sg.get('name')]
        if not sg_names:
            sg_names.append('default')

        sg_names = list(set(sg_names))

Here the security groups are read (falling back to 'default' when none are given). Continuing with the code:

        requested_networks = None
        if (self.ext_mgr.is_loaded('os-networks')
                or utils.is_neutron()):
            requested_networks = server_dict.get('networks')
        if requested_networks is not None:
            if not isinstance(requested_networks, list):
                expl = _('Bad networks format')
                raise exc.HTTPBadRequest(explanation=expl)
            requested_networks = self._get_requested_networks(
                requested_networks)

As you can see, this fetches the networks value specified in the body and then analyzes it with _get_requested_networks, mainly to resolve things like the network UUID (the input might be a name rather than a UUID). Also, if a fixed IP is specified in this field, that IP is allocated; otherwise one is taken from the pool. A hedged illustration follows.
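A sketch of the networks field as a Python literal (the uuid is the one from the curl example at the top; fixed_ip and port are optional keys handled by _get_requested_networks, and the IP shown is invented):

    requested_networks = [
        # attach to this network; omit fixed_ip to get one from the pool
        {"uuid": "56c261c9-05ab-4bcc-b7c9-4e871d9edd9b", "fixed_ip": "10.0.0.5"},
    ]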
Continuing with the code:

        (access_ip_v4, ) = server_dict.get('accessIPv4'),
        if access_ip_v4 is not None:
            self._validate_access_ipv4(access_ip_v4)

        (access_ip_v6, ) = server_dict.get('accessIPv6'),
        if access_ip_v6 is not None:
            self._validate_access_ipv6(access_ip_v6)

This checks whether the body specifies access IPs and validates them.

        try:
            flavor_id = self._flavor_id_from_req_data(body)
        except ValueError as error:
            msg = _("Invalid flavorRef provided.")
            raise exc.HTTPBadRequest(explanation=msg)

This obtains the flavor's id.

        # optional openstack extensions:
        key_name = None
        if self.ext_mgr.is_loaded('os-keypairs'):
            key_name = server_dict.get('key_name')

        user_data = None
        if self.ext_mgr.is_loaded('os-user-data'):
            user_data = server_dict.get('user_data')
        self._validate_user_data(user_data)

        availability_zone = None
        if self.ext_mgr.is_loaded('os-availability-zone'):
            availability_zone = server_dict.get('availability_zone')

The meaning here is simple as well: check whether a key, user_data, or availability_zone was specified.

        block_device_mapping = None
        block_device_mapping_v2 = None
        legacy_bdm = True
        if self.ext_mgr.is_loaded('os-volumes'):
            block_device_mapping = server_dict.get('block_device_mapping', [])
            for bdm in block_device_mapping:
                try:
                    block_device.validate_device_name(bdm.get("device_name"))
                    block_device.validate_and_default_volume_size(bdm)
                except exception.InvalidBDMFormat as e:
                    raise exc.HTTPBadRequest(explanation=e.format_message())

                if 'delete_on_termination' in bdm:
                    bdm['delete_on_termination'] = strutils.bool_from_string(
                        bdm['delete_on_termination'])

            if self.ext_mgr.is_loaded('os-block-device-mapping-v2-boot'):
                # Consider the new data format for block device mapping
                block_device_mapping_v2 = server_dict.get(
                    'block_device_mapping_v2', [])
                # NOTE (ndipanov):  Disable usage of both legacy and new
                #                   block device format in the same request
                if block_device_mapping and block_device_mapping_v2:
                    expl = _('Using different block_device_mapping syntaxes '
                             'is not allowed in the same request.')
                    raise exc.HTTPBadRequest(explanation=expl)

                # Assume legacy format
                legacy_bdm = not bool(block_device_mapping_v2)

                try:
                    block_device_mapping_v2 = [
                        block_device.BlockDeviceDict.from_api(bdm_dict)
                        for bdm_dict in block_device_mapping_v2]
                except exception.InvalidBDMFormat as e:
                    raise exc.HTTPBadRequest(explanation=e.format_message())

        block_device_mapping = (block_device_mapping or
                                block_device_mapping_v2)

        ret_resv_id = False

These are the block-device-related operations. First the block_device_mapping field is fetched from the body, then each block device's name and size are validated, and delete_on_termination, if present, is normalized to a boolean. If the v2 extension is loaded, the new-format block_device_mapping_v2 is considered instead, and mixing both formats in one request is rejected. A hedged example of a legacy entry follows.
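A sketch of one legacy block_device_mapping entry as a Python literal (device_name, volume_size, and delete_on_termination are the fields validated above; volume_id is a hypothetical attachment source):

    block_device_mapping = [{
        "device_name": "/dev/vdb",        # checked by validate_device_name
        "volume_id": "0c2ad804-...",      # hypothetical volume UUID (truncated)
        "volume_size": "10",              # validated/defaulted above
        "delete_on_termination": "false", # parsed by bool_from_string
    }]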

        min_count = 1
        max_count = 1
        if self.ext_mgr.is_loaded('os-multiple-create'):
            ret_resv_id = server_dict.get('return_reservation_id', False)
            min_count = server_dict.get('min_count', 1)
            max_count = server_dict.get('max_count', min_count)

        try:
            min_count = utils.validate_integer(
                min_count, "min_count", min_value=1)
            max_count = utils.validate_integer(
                max_count, "max_count", min_value=1)
        except exception.InvalidInput as e:
            raise exc.HTTPBadRequest(explanation=e.format_message())

        if min_count > max_count:
            msg = _('min_count must be <= max_count')
            raise exc.HTTPBadRequest(explanation=msg)

Check whether min_count and max_count are present and, if so, whether they are valid values with min_count <= max_count.

        auto_disk_config = False
        if self.ext_mgr.is_loaded('OS-DCF'):
            auto_disk_config = server_dict.get('auto_disk_config')

This checks whether the auto_disk_config field was set (only when the OS-DCF extension is loaded).

        scheduler_hints = {}
        if self.ext_mgr.is_loaded('OS-SCH-HNT'):
            scheduler_hints = server_dict.get('scheduler_hints', {})

Much like hints for a SQL query optimizer, this checks whether any hints were supplied for the scheduling algorithm; a hedged example follows.
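For example, a request could carry a hint understood by Nova's stock DifferentHostFilter, asking the scheduler to avoid the hosts running certain instances (the hint name is real; the UUID is invented):

    scheduler_hints = {
        # place the new VM on a different host than this instance
        "different_host": ["a0cf03a5-d921-4877-bb5c-86d26cf818e1"],
    }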

        try:
            _get_inst_type = flavors.get_flavor_by_flavor_id
            inst_type = _get_inst_type(flavor_id, ctxt=context,
                                       read_deleted="no")

            (instances, resv_id) = self.compute_api.create(context,
                            inst_type,
                            image_uuid,
                            display_name=name,
                            display_description=name,
                            key_name=key_name,
                            metadata=server_dict.get('metadata', {}),
                            access_ip_v4=access_ip_v4,
                            access_ip_v6=access_ip_v6,
                            injected_files=injected_files,
                            admin_password=password,
                            min_count=min_count,
                            max_count=max_count,
                            requested_networks=requested_networks,
                            security_group=sg_names,
                            user_data=user_data,
                            availability_zone=availability_zone,
                            config_drive=config_drive,
                            block_device_mapping=block_device_mapping,
                            auto_disk_config=auto_disk_config,
                            scheduler_hints=scheduler_hints,
                            legacy_bdm=legacy_bdm)
        except (exception.QuotaError,
                exception.PortLimitExceeded) as error:
            raise exc.HTTPRequestEntityTooLarge(
                explanation=error.format_message(),
                headers={'Retry-After': 0})
        except exception.InvalidMetadataSize as error:
            raise exc.HTTPRequestEntityTooLarge(
                explanation=error.format_message())
        except exception.ImageNotFound as error:
            msg = _("Can not find requested image")
            raise exc.HTTPBadRequest(explanation=msg)
        except exception.FlavorNotFound as error:
            msg = _("Invalid flavorRef provided.")
            raise exc.HTTPBadRequest(explanation=msg)
        except exception.KeypairNotFound as error:
            msg = _("Invalid key_name provided.")
            raise exc.HTTPBadRequest(explanation=msg)
        except exception.ConfigDriveInvalidValue:
            msg = _("Invalid config_drive provided.")
            raise exc.HTTPBadRequest(explanation=msg)
        except messaging.RemoteError as err:
            msg = "%(err_type)s: %(err_msg)s" % {'err_type': err.exc_type,
                                                 'err_msg': err.value}
            raise exc.HTTPBadRequest(explanation=msg)
        except UnicodeDecodeError as error:
            msg = "UnicodeError: %s" % unicode(error)
            raise exc.HTTPBadRequest(explanation=msg)
        except (exception.ImageNotActive,
                exception.FlavorDiskTooSmall,
                exception.FlavorMemoryTooSmall,
                exception.InvalidMetadata,
                exception.InvalidRequest,
                exception.MultiplePortsNotApplicable,
                exception.NetworkNotFound,
                exception.PortNotFound,
                exception.SecurityGroupNotFound,
                exception.InvalidBDM,
                exception.PortRequiresFixedIP,
                exception.NetworkRequiresSubnet,
                exception.InstanceUserDataMalformed) as error:
            raise exc.HTTPBadRequest(explanation=error.format_message())
        except (exception.PortInUse,
                exception.NoUniqueMatch) as error:
            raise exc.HTTPConflict(explanation=error.format_message())

This is where the compute service is called to create the VM; we will come back to it later. First the closing lines:

        # If the caller wanted a reservation_id, return it
        if ret_resv_id:
            return wsgi.ResponseObject({'reservation_id': resv_id},
                                       xml=ServerMultipleCreateTemplate)

        req.cache_db_instances(instances)
        server = self._view_builder.create(req, instances[0])

        if CONF.enable_instance_password:
            server['server']['adminPass'] = password

        robj = wsgi.ResponseObject(server)

        return self._add_location(robj)

This first checks whether the user asked for a reservation_id; if so, that id alone is returned. Otherwise _view_builder builds the response, which mainly means formatting things into JSON data that is returned to the user; a rough sketch follows.
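Roughly, the returned JSON has this shape (implied by server['server']['adminPass'] above; written as a Python literal, with the other fields elided):

    response_body = {
        "server": {
            "id": "<instance uuid>",
            "adminPass": "<generated password>",  # only if enable_instance_password
            # ... links and other view-builder fields ...
        }
    }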

4. nova-compute's creation of the VM
Here we analyze in detail how the VM gets created on the compute side. First these two lines:

            _get_inst_type = flavors.get_flavor_by_flavor_id
            inst_type = _get_inst_type(flavor_id, ctxt=context,
                                       read_deleted="no")

This mainly uses the flavor id to obtain the concrete configuration, which essentially means querying the database:

def get_flavor_by_flavor_id(flavorid, ctxt=None, read_deleted="yes"):
    """Retrieve flavor by flavorid.

    :raises: FlavorNotFound
    """
    if ctxt is None:
        ctxt = context.get_admin_context(read_deleted=read_deleted)

    return db.flavor_get_by_flavor_id(ctxt, flavorid, read_deleted)

@require_context
def flavor_get_by_flavor_id(context, flavor_id, read_deleted):
    """Returns a dict describing specific flavor_id."""
    result = _flavor_get_query(context, read_deleted=read_deleted).\
                        filter_by(flavorid=flavor_id).\
                        order_by(asc("deleted"), asc("id")).\
                        first()
    if not result:
        raise exception.FlavorNotFound(flavor_id=flavor_id)
    return _dict_with_extra_specs(result)

OK, now for self.compute_api.create:

    @hooks.add_hook("create_instance")
    def create(self, context, instance_type,
               image_href, kernel_id=None, ramdisk_id=None,
               min_count=None, max_count=None,
               display_name=None, display_description=None,
               key_name=None, key_data=None, security_group=None,
               availability_zone=None, user_data=None, metadata=None,
               injected_files=None, admin_password=None,
               block_device_mapping=None, access_ip_v4=None,
               access_ip_v6=None, requested_networks=None, config_drive=None,
               auto_disk_config=None, scheduler_hints=None, legacy_bdm=True):
        """Provision instances, sending instance information to the
        scheduler.  The scheduler will determine where the instance(s)
        go and will handle creating the DB entries.

        Returns a tuple of (instances, reservation_id)
        """

        self._check_create_policies(context, availability_zone,
                requested_networks, block_device_mapping)

        if requested_networks and max_count > 1 and utils.is_neutron():
            self._check_multiple_instances_neutron_ports(requested_networks)

        return self._create_instance(
                               context, instance_type,
                               image_href, kernel_id, ramdisk_id,
                               min_count, max_count,
                               display_name, display_description,
                               key_name, key_data, security_group,
                               availability_zone, user_data, metadata,
                               injected_files, admin_password,
                               access_ip_v4, access_ip_v6,
                               requested_networks, config_drive,
                               block_device_mapping, auto_disk_config,
                               scheduler_hints=scheduler_hints,
                               legacy_bdm=legacy_bdm)

First the policy is checked, i.e. whether the caller has sufficient rights for the operations involved. For background on policy, see my earlier article:

    def _check_create_policies(self, context, availability_zone,
            requested_networks, block_device_mapping):
        """Check policies for create()."""
        target = {'project_id': context.project_id,
                  'user_id': context.user_id,
                  'availability_zone': availability_zone}
        check_policy(context, 'create', target)

        if requested_networks:
            check_policy(context, 'create:attach_network', target)

        if block_device_mapping:
            check_policy(context, 'create:attach_volume', target)

Then the network parameters are sanity-checked for neutron:

        if requested_networks and max_count > 1 and utils.is_neutron():
            self._check_multiple_instances_neutron_ports(requested_networks)

Next, self._create_instance is called to do the creation. This method is also fairly long, so we take it in pieces:

def _create_instance(self, context, instance_type,
               image_href, kernel_id, ramdisk_id,
               min_count, max_count,
               display_name, display_description,
               key_name, key_data, security_groups,
               availability_zone, user_data, metadata,
               injected_files, admin_password,
               access_ip_v4, access_ip_v6,
               requested_networks, config_drive,
               block_device_mapping, auto_disk_config,
               reservation_id=None, scheduler_hints=None,
               legacy_bdm=True):
        """Verify all the input parameters regardless of the provisioning
        strategy being performed and schedule the instance(s) for
        creation.
        """

        # Normalize and setup some parameters
        if reservation_id is None:
            reservation_id = utils.generate_uid('r')
        security_groups = security_groups or ['default']
        min_count = min_count or 1
        max_count = max_count or min_count
        block_device_mapping = block_device_mapping or []
        if not instance_type:
            instance_type = flavors.get_default_flavor()

As you can see, it starts by normalizing and checking some parameters; if no flavor id was provided, instance_type falls back to the default flavor.

        if image_href:
            image_id, boot_meta = self._get_image(context, image_href)
        else:
            image_id = None
            boot_meta = {}
            boot_meta['properties'] = \
                self._get_bdm_image_metadata(context,
                    block_device_mapping, legacy_bdm)

Let's first look at the case where image_href is present:

    def _get_image(self, context, image_href):
        if not image_href:
            return None, {}

        (image_service, image_id) = glance.get_remote_image_service(
                context, image_href)
        image = image_service.show(context, image_id)
        return image_id, image

This first calls glance.get_remote_image_service:

def get_remote_image_service(context, image_href):
    """Create an image_service and parse the id from the given image_href.

    The image_href param can be an href of the form
    'http://example.com:9292/v1/images/b8b2c6f7-7345-4e2f-afa2-eedaba9cbbe3',
    or just an id such as 'b8b2c6f7-7345-4e2f-afa2-eedaba9cbbe3'. If the
    image_href is a standalone id, then the default image service is returned.

    :param image_href: href that describes the location of an image
    :returns: a tuple of the form (image_service, image_id)

    """
    #NOTE(bcwaldon): If image_href doesn't look like a URI, assume its a
    # standalone image ID
    if '/' not in str(image_href):
        image_service = get_default_image_service()
        return image_service, image_href

    try:
        (image_id, glance_host, glance_port, use_ssl) = \
            _parse_image_ref(image_href)
        glance_client = GlanceClientWrapper(context=context,
                host=glance_host, port=glance_port, use_ssl=use_ssl)
    except ValueError:
        raise exception.InvalidImageRef(image_href=image_href)

    image_service = GlanceImageService(client=glance_client)
    return image_service, image_id

In my case image_href is just a short ID string rather than a URL, so the branch below is taken:

def get_default_image_service():
    return GlanceImageService()

class GlanceImageService(object):
    """Provides storage and retrieval of disk image objects within Glance."""

    def __init__(self, client=None):
        self._client = client or GlanceClientWrapper()
        #NOTE(jbresnah) build the table of download handlers at the beginning
        # so that operators can catch errors at load time rather than whenever
        # a user attempts to use a module.  Note this cannot be done in glance
        # space when this python module is loaded because the download module
        # may require configuration options to be parsed.
        self._download_handlers = {}
        download_modules = image_xfers.load_transfer_modules()

        for scheme, mod in download_modules.iteritems():
            if scheme not in CONF.allowed_direct_url_schemes:
                continue

            try:
                self._download_handlers[scheme] = mod.get_download_handler()
            except Exception as ex:
                fmt = _('When loading the module %(module_str)s the '
                         'following error occurred: %(ex)s')
                LOG.error(fmt % {'module_str': str(mod), 'ex': ex})

As you can see, what we end up with is essentially an object that can later be used to download the image. Next we look at:

    image = image_service.show(context, image_id)

The concrete code is:

    def show(self, context, image_id):
        """Returns a dict with image data for the given opaque image id."""
        try:
            image = self._client.call(context, 1, 'get', image_id)
        except Exception:
            _reraise_translated_image_exception(image_id)

        if not _is_image_available(context, image):
            raise exception.ImageNotFound(image_id=image_id)

        base_image_meta = _translate_from_glance(image)
        return base_image_meta

As you can see, this goes through the Glance client (self._client.call) to fetch the image's information.
Continuing with the code:

        self._check_auto_disk_config(image=boot_meta,
                                     auto_disk_config=auto_disk_config)

    def _check_auto_disk_config(self, instance=None, image=None,
                                **extra_instance_updates):
        auto_disk_config = extra_instance_updates.get("auto_disk_config")
        if auto_disk_config is None:
            return
        if not image and not instance:
            return

        if image:
            image_props = image.get("properties", {})
            auto_disk_config_img = \
                utils.get_auto_disk_config_from_image_props(image_props)
            image_ref = image.get("id")
        else:
            sys_meta = utils.instance_sys_meta(instance)
            image_ref = sys_meta.get('image_base_image_ref')
            auto_disk_config_img = \
                utils.get_auto_disk_config_from_instance(sys_meta=sys_meta)

        self._ensure_auto_disk_config_is_valid(auto_disk_config_img,
                                               auto_disk_config,
                                               image_ref)

First the disk-related settings are gathered, then _ensure_auto_disk_config_is_valid does the final confirmation; the latter boils down to checking whether the metadata allows auto config:

def is_auto_disk_config_disabled(auto_disk_config_raw):
    auto_disk_config_disabled = False
    if auto_disk_config_raw is not None:
        adc_lowered = auto_disk_config_raw.strip().lower()
        if adc_lowered == "disabled":
            auto_disk_config_disabled = True
    return auto_disk_config_disabled
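Its behavior on a few sample inputs, for illustration:

    # is_auto_disk_config_disabled(None)        -> False
    # is_auto_disk_config_disabled("Disabled ") -> True   (stripped, lowercased)
    # is_auto_disk_config_disabled("manual")    -> False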

Continuing with the code:

        handle_az = self._handle_availability_zone
        availability_zone, forced_host, forced_node = handle_az(context,
                                                            availability_zone)

    @staticmethod
    def _handle_availability_zone(context, availability_zone):
        # NOTE(vish): We have a legacy hack to allow admins to specify hosts
        #             via az using az:host:node. It might be nice to expose an
        #             api to specify specific hosts to force onto, but for
        #             now it just supports this legacy hack.
        # NOTE(deva): It is also possible to specify az::node, in which case
        #             the host manager will determine the correct host.
        forced_host = None
        forced_node = None
        if availability_zone and ':' in availability_zone:
            c = availability_zone.count(':')
            if c == 1:
                availability_zone, forced_host = availability_zone.split(':')
            elif c == 2:
                if '::' in availability_zone:
                    availability_zone, forced_node = \
                            availability_zone.split('::')
                else:
                    availability_zone, forced_host, forced_node = \
                            availability_zone.split(':')
            else:
                raise exception.InvalidInput(
                        reason="Unable to parse availability_zone")

        if not availability_zone:
            availability_zone = CONF.default_schedule_zone

        if forced_host:
            check_policy(context, 'create:forced_host', {})
        if forced_node:
            check_policy(context, 'create:forced_host', {})

        return availability_zone, forced_host, forced_node

This first parses the parameter's format: az:host:node splits into three parts, and az::node into two. If no az parameter was provided, it falls back to CONF.default_schedule_zone. Finally it checks whether the caller is allowed to force a host or node. Some parsing examples are shown below.
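For illustration, how the legacy syntax from the code comments parses (derived directly from the split logic above):

    # "nova"             -> ("nova", None,    None)
    # "nova:host1"       -> ("nova", "host1", None)
    # "nova::node1"      -> ("nova", None,    "node1")
    # "nova:host1:node1" -> ("nova", "host1", "node1")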

        base_options, max_net_count = self._validate_and_build_base_options(
                context,
                instance_type, boot_meta, image_href, image_id, kernel_id,
                ramdisk_id, display_name, display_description,
                key_name, key_data, security_groups, availability_zone,
                forced_host, user_data, metadata, injected_files, access_ip_v4,
                access_ip_v6, requested_networks, config_drive,
                block_device_mapping, auto_disk_config, reservation_id,
                max_count)

I won't list that method here; it just does a final round of consolidation and filtering of the parameters.

        # max_net_count is the maximum number of instances requested by the
        # user adjusted for any network quota constraints, including
        # considertaion of connections to each requested network
        if max_net_count == 0:
            raise exception.PortLimitExceeded()
        elif max_net_count < max_count:
            LOG.debug(_("max count reduced from %(max_count)d to "
                        "%(max_net_count)d due to network port quota"),
                       {'max_count': max_count,
                        'max_net_count': max_net_count})
            max_count = max_net_count

This determines, from the network quota, the maximum number of VMs that can actually be created.

        block_device_mapping = self._check_and_transform_bdm(
            base_options, boot_meta, min_count, max_count,
            block_device_mapping, legacy_bdm)

This mainly obtains (and, where needed, transforms) the block device mappings.

        filter_properties = self._build_filter_properties(context,
                scheduler_hints, forced_host, forced_node, instance_type)
        instances = self._provision_instances(context, instance_type,
                min_count, max_count, base_options, boot_meta, security_groups,
                block_device_mapping)

_build_filter_properties (shown next) assembles the filter properties for the scheduler, while _provision_instances does the quota-related checks and creates the instances' DB entries.

    def _build_filter_properties(self, context, scheduler_hints, forced_host,
            forced_node, instance_type):
        filter_properties = dict(scheduler_hints=scheduler_hints)
        filter_properties['instance_type'] = instance_type
        if forced_host:
            filter_properties['force_hosts'] = [forced_host]
        if forced_node:
            filter_properties['force_nodes'] = [forced_node]
        return filter_properties

This constructs a dict of filter properties; a rough example of its shape follows.
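For the curl request at the top (availability zone "nova", no forced host or node, no hints), the resulting dict would look roughly like this (a sketch, not captured output):

    filter_properties = {
        "scheduler_hints": {},
        "instance_type": inst_type,  # the flavor loaded earlier
    }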

        self._update_instance_group(context, instances, scheduler_hints)

If an instance group is involved, the related bookkeeping is done here.

        for instance in instances:
            self._record_action_start(context, instance,
                                      instance_actions.CREATE)

This mainly records a start action in the database:

    @base.remotable_classmethod
    def action_start(cls, context, instance_uuid, action_name,
                     want_result=True):
        values = cls.pack_action_start(context, instance_uuid, action_name)
        db_action = db.action_start(context, values)
        if want_result:
            return cls._from_db_object(context, cls(), db_action)

Continuing with the code:

        self.compute_task_api.build_instances(context,
                instances=instances, image=boot_meta,
                filter_properties=filter_properties,
                admin_password=admin_password,
                injected_files=injected_files,
                requested_networks=requested_networks,
                security_groups=security_groups,
                block_device_mapping=block_device_mapping,
                legacy_bdm=False)

Despite all the work above, no VM has actually been built yet, so this is the key call. Let's take a look:

    @property
    def compute_task_api(self):
        if self._compute_task_api is None:
            # TODO(alaski): Remove calls into here from conductor manager so
            # that this isn't necessary. #1180540
            from nova import conductor
            self._compute_task_api = conductor.ComputeTaskAPI()
        return self._compute_task_api

Now look at conductor.ComputeTaskAPI():

def ComputeTaskAPI(*args, **kwargs):
    use_local = kwargs.pop('use_local', False)
    if oslo.config.cfg.CONF.conductor.use_local or use_local:
        api = conductor_api.LocalComputeTaskAPI
    else:
        api = conductor_api.ComputeTaskAPI
    return api(*args, **kwargs)

Let's just follow the local variant:

    def build_instances(self, context, instances, image,
            filter_properties, admin_password, injected_files,
            requested_networks, security_groups, block_device_mapping,
            legacy_bdm=True):
        utils.spawn_n(self._manager.build_instances, context,
                instances=instances, image=image,
                filter_properties=filter_properties,
                admin_password=admin_password, injected_files=injected_files,
                requested_networks=requested_networks,
                security_groups=security_groups,
                block_device_mapping=block_device_mapping,
                legacy_bdm=legacy_bdm)

spawn_n here hands the actual work to a greenthread whose entry point is self._manager.build_instances; everything after it is passed as arguments. The manager is:

    def __init__(self):
        # TODO(danms): This needs to be something more generic for
        # other/future users of this sort of functionality.
        self._manager = utils.ExceptionHelper(
                manager.ComputeTaskManager())

class ComputeTaskManager(base.Base):
    """Namespace for compute methods.

    This class presents an rpc API for nova-conductor under the 'compute_task'
    namespace.  The methods here are compute operations that are invoked
    by the API service.  These methods see the operation to completion, which
    may involve coordinating activities on multiple compute nodes.
    """

    target = messaging.Target(namespace='compute_task', version='1.6')

    def __init__(self):
        super(ComputeTaskManager, self).__init__()
        self.compute_rpcapi = compute_rpcapi.ComputeAPI()
        self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI()
        self.image_service = glance.get_default_image_service()

As you can see, handles to compute, scheduler, and image are all in place. Now the build_instances method:

    def build_instances(self, context, instances, image, filter_properties,
            admin_password, injected_files, requested_networks,
            security_groups, block_device_mapping, legacy_bdm=True):
        request_spec = scheduler_utils.build_request_spec(context, image,
                                                          instances)
        # NOTE(alaski): For compatibility until a new scheduler method is used.
        request_spec.update({'block_device_mapping': block_device_mapping,
                             'security_group': security_groups})
        self.scheduler_rpcapi.run_instance(context, request_spec=request_spec,
                admin_password=admin_password, injected_files=injected_files,
                requested_networks=requested_networks, is_first_time=True,
                filter_properties=filter_properties,
                legacy_bdm_in_spec=legacy_bdm)

At last the scheduler enters the stage. The first two statements prepare the parameters for the scheduler, then the scheduler's run_instance method is invoked. Based on my earlier article on oslo.messaging, the concrete code is easy to find:

    def run_instance(self, context, request_spec, admin_password,
            injected_files, requested_networks, is_first_time,
            filter_properties, legacy_bdm_in_spec=True):
        """Tries to call schedule_run_instance on the driver.
        Sets instance vm_state to ERROR on exceptions
        """
        instance_uuids = request_spec['instance_uuids']
        with compute_utils.EventReporter(context, conductor_api.LocalAPI(),
                                         'schedule', *instance_uuids):
            try:
                return self.driver.schedule_run_instance(context,
                        request_spec, admin_password, injected_files,
                        requested_networks, is_first_time, filter_properties,
                        legacy_bdm_in_spec)

            except exception.NoValidHost as ex:
                # don't re-raise
                self._set_vm_state_and_notify('run_instance',
                                              {'vm_state': vm_states.ERROR,
                                              'task_state': None},
                                              context, ex, request_spec)
            except Exception as ex:
                with excutils.save_and_reraise_exception():
                    self._set_vm_state_and_notify('run_instance',
                                                  {'vm_state': vm_states.ERROR,
                                                  'task_state': None},
                                                  context, ex, request_spec)

As you can see, the scheduler delegates the concrete creation work to a driver, which is named in the configuration file; on my setup it is scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler. This object's schedule_run_instance method is fairly long, so let's split it up:

    def schedule_run_instance(self, context, request_spec,
                              admin_password, injected_files,
                              requested_networks, is_first_time,
                              filter_properties, legacy_bdm_in_spec):
        """This method is called from nova.compute.api to provision
        an instance.  We first create a build plan (a list of WeightedHosts)
        and then provision.

        Returns a list of the instances created.
        """
        payload = dict(request_spec=request_spec)
        self.notifier.info(context, 'scheduler.run_instance.start', payload)

        instance_uuids = request_spec.get('instance_uuids')
        LOG.info(_("Attempting to build %(num_instances)d instance(s) "
                    "uuids: %(instance_uuids)s"),
                  {'num_instances': len(instance_uuids),
                   'instance_uuids': instance_uuids})
        LOG.debug(_("Request Spec: %s") % request_spec)

First some information is logged and an event is emitted.

        weighed_hosts = self._schedule(context, request_spec,
                                       filter_properties, instance_uuids)

This method is important: it is how the concrete hosts are obtained. It is fairly long, so again in pieces:

    def _schedule(self, context, request_spec, filter_properties,
                  instance_uuids=None):
        """Returns a list of hosts that meet the required specs,
        ordered by their fitness.
        """
        elevated = context.elevated()
        instance_properties = request_spec['instance_properties']
        instance_type = request_spec.get("instance_type", None)

        update_group_hosts = self._setup_instance_group(context,
                filter_properties)

        config_options = self._get_configuration_options()

First a pile of information is gathered. I haven't worked with instance groups myself; they appear to control whether the VMs may land on the same host or must be kept apart (affinity/anti-affinity).
Looking at what follows:

        # but if we've exceeded max retries... then we really only
        # have a single instance.
        properties = instance_properties.copy()
        if instance_uuids:
            properties['uuid'] = instance_uuids[0]
        self._populate_retry(filter_properties, properties)

        filter_properties.update({'context': context,
                                  'request_spec': request_spec,
                                  'config_options': config_options,
                                  'instance_type': instance_type})

        self.populate_filter_properties(request_spec,
                                        filter_properties)

This part deals with scheduling retries: _populate_retry tracks how many times scheduling has been attempted for this instance and bails out once the maximum is exceeded; the filter properties are then filled in for the scheduler.

        hosts = self._get_all_host_states(elevated)

        selected_hosts = []
        if instance_uuids:
            num_instances = len(instance_uuids)
        else:
            num_instances = request_spec.get('num_instances', 1)

This obtains the number of instances to create and the list of available hosts. Next comes a loop that finds a suitable host for each instance:

        for num in xrange(num_instances):
            # Filter local hosts based on requirements ...
            hosts = self.host_manager.get_filtered_hosts(hosts,
                    filter_properties, index=num)
            if not hosts:
                # Can't get any more locally.
                break

            LOG.debug(_("Filtered %(hosts)s"), {'hosts': hosts})

            weighed_hosts = self.host_manager.get_weighed_hosts(hosts,
                    filter_properties)

            LOG.debug(_("Weighed %(hosts)s"), {'hosts': weighed_hosts})

            scheduler_host_subset_size = CONF.scheduler_host_subset_size
            if scheduler_host_subset_size > len(weighed_hosts):
                scheduler_host_subset_size = len(weighed_hosts)
            if scheduler_host_subset_size < 1:
                scheduler_host_subset_size = 1

            chosen_host = random.choice(
                weighed_hosts[0:scheduler_host_subset_size])
            selected_hosts.append(chosen_host)

            # Now consume the resources so the filter/weights
            # will change for the next instance.
            chosen_host.obj.consume_from_instance(instance_properties)
            if update_group_hosts is True:
                filter_properties['group_hosts'].add(chosen_host.obj.host)
        return selected_hosts

First comes get_filtered_hosts, which filters the hosts and keeps those satisfying the conditions. Its code is:

        filter_classes = self._choose_host_filters(filter_class_names)
        ignore_hosts = filter_properties.get('ignore_hosts', [])
        force_hosts = filter_properties.get('force_hosts', [])
        force_nodes = filter_properties.get('force_nodes', [])

        if ignore_hosts or force_hosts or force_nodes:
            # NOTE(deva): we can't assume "host" is unique because
            #             one host may have many nodes.
            name_to_cls_map = dict([((x.host, x.nodename), x) for x in hosts])
            if ignore_hosts:
                _strip_ignore_hosts(name_to_cls_map, ignore_hosts)
                if not name_to_cls_map:
                    return []
            # NOTE(deva): allow force_hosts and force_nodes independently
            if force_hosts:
                _match_forced_hosts(name_to_cls_map, force_hosts)
            if force_nodes:
                _match_forced_nodes(name_to_cls_map, force_nodes)
            if force_hosts or force_nodes:
                # NOTE(deva): Skip filters when forcing host or node
                if name_to_cls_map:
                    return name_to_cls_map.values()
            hosts = name_to_cls_map.itervalues()

        return self.filter_handler.get_filtered_objects(filter_classes,
                hosts, filter_properties, index)

filter_classes here comes from this configuration entry:

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

Having obtained filter_classes, the hosts first undergo the simplest kind of filtering (for example, if the user forced a particular machine, we just check whether it exists and, if so, settle on it, skipping the filters). Finally self.filter_handler.get_filtered_objects is called:

class BaseFilterHandler(loadables.BaseLoader):
    """Base class to handle loading filter classes.

    This class should be subclassed where one needs to use filters.
    """

    def get_filtered_objects(self, filter_classes, objs,
            filter_properties, index=0):
        list_objs = list(objs)
        LOG.debug(_("Starting with %d host(s)"), len(list_objs))
        for filter_cls in filter_classes:
            cls_name = filter_cls.__name__
            filter = filter_cls()

            if filter.run_filter_for_index(index):
                objs = filter.filter_all(list_objs,
                                               filter_properties)
                if objs is None:
                    LOG.debug(_("Filter %(cls_name)s says to stop filtering"),
                          {'cls_name': cls_name})
                    return
                list_objs = list(objs)
                if not list_objs:
                    LOG.info(_("Filter %s returned 0 hosts"), cls_name)
                    break
                LOG.debug(_("Filter %(cls_name)s returned "
                            "%(obj_len)d host(s)"),
                          {'cls_name': cls_name, 'obj_len': len(list_objs)})
        return list_objs

As you can see, the logic is very simple: the filters named by the scheduler_default_filters configuration option are applied to our hosts in turn, and the hosts passing all of them are returned; a toy filter sketch follows.
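To make the filter contract concrete, here is a minimal, hypothetical filter in the shape Nova's host filters use (BaseHostFilter and host_passes are the real interface; the class itself is invented, loosely mimicking the stock RamFilter):

    from nova.scheduler import filters

    class ToyRamFilter(filters.BaseHostFilter):
        """Hypothetical filter: pass hosts with enough free RAM for the flavor."""

        def host_passes(self, host_state, filter_properties):
            instance_type = filter_properties.get('instance_type') or {}
            requested_ram = instance_type.get('memory_mb', 0)
            # returning True keeps the host; False drops it from the list
            return host_state.free_ram_mb >= requested_ram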
Then, further down:

            weighed_hosts = self.host_manager.get_weighed_hosts(hosts,
                    filter_properties)

class BaseWeightHandler(loadables.BaseLoader):
    object_class = WeighedObject

    def get_weighed_objects(self, weigher_classes, obj_list,
            weighing_properties):
        """Return a sorted (descending), normalized list of WeighedObjects."""

        if not obj_list:
            return []

        weighed_objs = [self.object_class(obj, 0.0) for obj in obj_list]
        for weigher_cls in weigher_classes:
            weigher = weigher_cls()
            weights = weigher.weigh_objects(weighed_objs, weighing_properties)

            # Normalize the weights
            weights = normalize(weights,
                                minval=weigher.minval,
                                maxval=weigher.maxval)

            for i, weight in enumerate(weights):
                obj = weighed_objs[i]
                obj.weight += weigher.weight_multiplier() * weight

        return sorted(weighed_objs, key=lambda x: x.weight, reverse=True)

weigher_classes here is scheduler_weight_classes=nova.scheduler.weights.all_weighers. Through scheduler_weight_classes a weight can be computed for each host; a toy weigher sketch follows.
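Again for concreteness, a minimal, hypothetical weigher in the shape Nova expects (BaseHostWeigher and _weigh_object are the real interface; the class is invented, mimicking what the stock RAM weigher does):

    from nova.scheduler import weights

    class ToyFreeRamWeigher(weights.BaseHostWeigher):
        """Hypothetical weigher: more free RAM means a higher weight."""

        def _weigh_object(self, host_state, weight_properties):
            # raw values get normalized by the handler shown above
            return host_state.free_ram_mb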
Coming back to the code:

            scheduler_host_subset_size = CONF.scheduler_host_subset_size
            if scheduler_host_subset_size > len(weighed_hosts):
                scheduler_host_subset_size = len(weighed_hosts)
            if scheduler_host_subset_size < 1:
                scheduler_host_subset_size = 1

            chosen_host = random.choice(
                weighed_hosts[0:scheduler_host_subset_size])
            selected_hosts.append(chosen_host)

As you can see, the final chosen_host still carries some randomness: picking at random among the top scheduler_host_subset_size weighed hosts spreads load when many requests arrive at once. In short, a suitable host ends up being selected for each VM. The last few lines are simple: they update the chosen host's in-memory resource usage so that the filters and weights reflect the consumption when placing the next instance:

            # Now consume the resources so the filter/weights
            # will change for the next instance.
            chosen_host.obj.consume_from_instance(instance_properties)
            if update_group_hosts is True:
                filter_properties['group_hosts'].add(chosen_host.obj.host)
        return selected_hosts

Now we have N VMs to create and know which hosts qualify, so next the VMs actually get built:

        for num, instance_uuid in enumerate(instance_uuids):
            request_spec['instance_properties']['launch_index'] = num

            try:
                try:
                    weighed_host = weighed_hosts.pop(0)
                    LOG.info(_("Choosing host %(weighed_host)s "
                                "for instance %(instance_uuid)s"),
                              {'weighed_host': weighed_host,
                               'instance_uuid': instance_uuid})
                except IndexError:
                    raise exception.NoValidHost(reason="")

                self._provision_resource(context, weighed_host,
                                         request_spec,
                                         filter_properties,
                                         requested_networks,
                                         injected_files, admin_password,
                                         is_first_time,
                                         instance_uuid=instance_uuid,
                                         legacy_bdm_in_spec=legacy_bdm_in_spec)

Here one host is popped off the qualifying list and the instance is created on it. The core call is self._provision_resource; Xiao Qin's guess is that it mainly ends up calling libvirt to create the virtual machine. Let's take a look:

    def _provision_resource(self, context, weighed_host, request_spec,
            filter_properties, requested_networks, injected_files,
            admin_password, is_first_time, instance_uuid=None,
            legacy_bdm_in_spec=True):
        """Create the requested resource in this Zone."""
        # NOTE(vish): add our current instance back into the request spec
        request_spec['instance_uuids'] = [instance_uuid]
        payload = dict(request_spec=request_spec,
                       weighted_host=weighed_host.to_dict(),
                       instance_id=instance_uuid)
        self.notifier.info(context,
                           'scheduler.run_instance.scheduled', payload)

        # Update the metadata if necessary
        scheduler_hints = filter_properties.get('scheduler_hints') or {}
        try:
            updated_instance = driver.instance_update_db(context,
                                                         instance_uuid)
        except exception.InstanceNotFound:
            LOG.warning(_("Instance disappeared during scheduling"),
                        context=context, instance_uuid=instance_uuid)

        else:
            scheduler_utils.populate_filter_properties(filter_properties,
                    weighed_host.obj)

            self.compute_rpcapi.run_instance(context,
                    instance=updated_instance,
                    host=weighed_host.obj.host,
                    request_spec=request_spec,
                    filter_properties=filter_properties,
                    requested_networks=requested_networks,
                    injected_files=injected_files,
                    admin_password=admin_password, is_first_time=is_first_time,
                    node=weighed_host.obj.nodename,
                    legacy_bdm_in_spec=legacy_bdm_in_spec)

First, look at this:

            updated_instance = driver.instance_update_db(context,
                                                         instance_uuid)

The concrete implementation is:

def instance_update_db(context, instance_uuid, extra_values=None):
    """Clear the host and node - set the scheduled_at field of an Instance.

    :returns: An Instance with the updated fields set properly.
    """
    now = timeutils.utcnow()
    values = {'host': None, 'node': None, 'scheduled_at': now}
    if extra_values:
        values.update(extra_values)

    return db.instance_update(context, instance_uuid, values)

As you can see, this touches the instance's record in the database: it clears the host and node fields and stamps scheduled_at. (The record itself was already created by the API layer, so this is an update, not an insert.)
Continuing:

            scheduler_utils.populate_filter_properties(filter_properties,
                    weighed_host.obj)

def populate_filter_properties(filter_properties, host_state):
    """Add additional information to the filter properties after a node has
    been selected by the scheduling process.
    """
    if isinstance(host_state, dict):
        host = host_state['host']
        nodename = host_state['nodename']
        limits = host_state['limits']
    else:
        host = host_state.host
        nodename = host_state.nodename
        limits = host_state.limits

    # Adds a retry entry for the selected compute host and node:
    _add_retry_host(filter_properties, host, nodename)

    # Adds oversubscription policy
    if not filter_properties.get('force_hosts'):
        filter_properties['limits'] = limits

Once a node has been selected, this adds some extra information to the filter properties: a retry entry recording the chosen host and node, and, unless hosts were forced, the host's oversubscription limits.
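To make that concrete, after this call filter_properties carries roughly the following shape (the values are illustrative and assume scheduler retries are enabled):

filter_properties = {
    'retry': {
        'num_attempts': 1,                      # bumped on each reschedule
        'hosts': [['compute01', 'compute01']],  # one [host, node] per attempt
    },
    'limits': {'memory_mb': 12288, 'vcpu': 16.0},  # oversubscription limits
    # ...plus whatever was already present (scheduler_hints and so on)
}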
On to the next call:

            self.compute_rpcapi.run_instance(context,
                    instance=updated_instance,
                    host=weighed_host.obj.host,
                    request_spec=request_spec,
                    filter_properties=filter_properties,
                    requested_networks=requested_networks,
                    injected_files=injected_files,
                    admin_password=admin_password, is_first_time=is_first_time,
                    node=weighed_host.obj.nodename,
                    legacy_bdm_in_spec=legacy_bdm_in_spec)

As you can see, the scheduler issues an RPC so that compute builds the virtual machine. Here is the client-side method:

    def run_instance(self, ctxt, instance, host, request_spec,
                     filter_properties, requested_networks,
                     injected_files, admin_password,
                     is_first_time, node=None, legacy_bdm_in_spec=True):
        # NOTE(russellb) Havana compat
        version = self._get_compat_version('3.0', '2.37')
        instance_p = jsonutils.to_primitive(instance)
        msg_kwargs = {'instance': instance_p, 'request_spec': request_spec,
                      'filter_properties': filter_properties,
                      'requested_networks': requested_networks,
                      'injected_files': injected_files,
                      'admin_password': admin_password,
                      'is_first_time': is_first_time, 'node': node,
                      'legacy_bdm_in_spec': legacy_bdm_in_spec}

        cctxt = self.client.prepare(server=host, version=version)
        cctxt.cast(ctxt, 'run_instance', **msg_kwargs)

From the RPC articles earlier on Xiao Qin's blog, it is easy to see that this casts a run_instance message to the host the scheduler picked.
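Note that cast is fire-and-forget: the scheduler does not wait for compute to finish. A minimal sketch of the cast/call distinction with oslo.messaging (the transport, topic and server name are illustrative, not Nova's actual wiring):

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='compute', version='3.0')
client = messaging.RPCClient(transport, target)

cctxt = client.prepare(server='compute01', version='3.0')
cctxt.cast({}, 'run_instance')   # returns immediately, no reply expected
# cctxt.call({}, 'run_instance') # would block until the server replied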
Now the run_instance on the compute manager side:

    @wrap_exception()
    @reverts_task_state
    @wrap_instance_event
    @wrap_instance_fault
    def run_instance(self, context, instance, request_spec,
                     filter_properties, requested_networks,
                     injected_files, admin_password,
                     is_first_time, node, legacy_bdm_in_spec):

        if filter_properties is None:
            filter_properties = {}

        @utils.synchronized(instance['uuid'])
        def do_run_instance():
            self._run_instance(context, request_spec,
                    filter_properties, requested_networks, injected_files,
                    admin_password, is_first_time, node, instance,
                    legacy_bdm_in_spec)
        do_run_instance()

    def _run_instance(self, context, request_spec,
                      filter_properties, requested_networks, injected_files,
                      admin_password, is_first_time, node, instance,
                      legacy_bdm_in_spec):
        """Launch a new instance with specified options."""

        extra_usage_info = {}

        def notify(status, msg="", fault=None, **kwargs):
            """Send a create.{start,error,end} notification."""
            type_ = "create.%(status)s" % dict(status=status)
            info = extra_usage_info.copy()
            info['message'] = unicode(msg)
            self._notify_about_instance_usage(context, instance, type_,
                    extra_usage_info=info, fault=fault, **kwargs)

        try:
            self._prebuild_instance(context, instance)

            if request_spec and request_spec.get('image'):
                image_meta = request_spec['image']
            else:
                image_meta = {}

            extra_usage_info = {"image_name": image_meta.get('name', '')}

            notify("start")  # notify that build is starting

            instance, network_info = self._build_instance(context,
                    request_spec, filter_properties, requested_networks,
                    injected_files, admin_password, is_first_time, node,
                    instance, image_meta, legacy_bdm_in_spec)
            notify("end", msg=_("Success"), network_info=network_info)

        except exception.RescheduledException as e:
            # Instance build encountered an error, and has been rescheduled.
            notify("error", fault=e)

        except exception.BuildAbortException as e:
            # Instance build aborted due to a non-failure
            LOG.info(e)
            notify("end", msg=unicode(e))  # notify that build is done

        except Exception as e:
            # Instance build encountered a non-recoverable error:
            with excutils.save_and_reraise_exception():
                self._set_instance_error_state(context, instance['uuid'])
                notify("error", fault=e)  # notify that build failed

First, this:

            self._prebuild_instance(context, instance)

    def _prebuild_instance(self, context, instance):
        self._check_instance_exists(context, instance)

        try:
            self._start_building(context, instance)
        except (exception.InstanceNotFound,
                exception.UnexpectedDeletingTaskStateError):
            msg = _("Instance disappeared before we could start it")
            # Quickly bail out of here
            raise exception.BuildAbortException(instance_uuid=instance['uuid'],
                    reason=msg)

    def _start_building(self, context, instance):
        """Save the host and launched_on fields and log appropriately."""
        LOG.audit(_('Starting instance...'), context=context,
                  instance=instance)
        self._instance_update(context, instance['uuid'],
                              vm_state=vm_states.BUILDING,
                              task_state=None,
                              expected_task_state=(task_states.SCHEDULING,
                                                   None))

    def _instance_update(self, context, instance_uuid, **kwargs):
        """Update an instance in the database using kwargs as value."""

        instance_ref = self.conductor_api.instance_update(context,
                                                          instance_uuid,
                                                          **kwargs)
        if (instance_ref['host'] == self.host and
                self.driver.node_is_available(instance_ref['node'])):
            rt = self._get_resource_tracker(instance_ref.get('node'))
            rt.update_usage(context, instance_ref)

        return instance_ref

As you can see, this mainly updates the instance's state (to BUILDING), and refreshes the resource tracker if the instance already belongs to this host.
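Also note the expected_task_state argument: it turns the update into a compare-and-swap, so a racing delete cannot be silently overwritten. Conceptually (a simplified sketch of the idea, not the actual DB-layer code):

class UnexpectedTaskStateError(Exception):
    pass

def guarded_update(instance, expected_task_state, **new_values):
    """Apply new_values only if task_state is one of the expected values."""
    if instance['task_state'] not in expected_task_state:
        raise UnexpectedTaskStateError(instance['task_state'])
    instance.update(new_values)
    return instance

# The _start_building call above amounts to:
# guarded_update(inst, (task_states.SCHEDULING, None),
#                vm_state=vm_states.BUILDING, task_state=None)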
Then this:

            instance, network_info = self._build_instance(context,
                    request_spec, filter_properties, requested_networks,
                    injected_files, admin_password, is_first_time, node,
                    instance, image_meta, legacy_bdm_in_spec)

Now for _build_instance. The method is fairly long, but the key lines are really these:

                instance = self._spawn(context, instance, image_meta,
                                       network_info, block_device_info,
                                       injected_files, admin_password,
                                       set_access_ip=set_access_ip)
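For context, the steps surrounding that call inside _build_instance look roughly like this (a comment-only outline; the method names follow the Icehouse tree but the arguments are abbreviated):

# Rough shape of _build_instance around _spawn (illustrative, not verbatim):
#
#   network_info      = self._allocate_network(...)   # Neutron ports / IPs
#   block_device_info = self._prep_block_device(...)  # volumes, ephemerals
#   instance          = self._spawn(...)              # hand off to the driver
#
# On RescheduledException the request goes back to the scheduler;
# other failures abort the build and put the instance into ERROR.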

And the core of self._spawn is:

            self.driver.spawn(context, instance, image_meta,
                              injected_files, admin_password,
                              network_info,
                              block_device_info)

In Xiao Qin's environment the driver here is the libvirt driver, whose spawn method is:

    # NOTE(ilyaalekseyev): Implementation like in multinics
    # for xenapi(tr3buchet)
    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, network_info=None, block_device_info=None):
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance,
                                            block_device_info,
                                            image_meta)
        self._create_image(context, instance,
                           disk_info['mapping'],
                           network_info=network_info,
                           block_device_info=block_device_info,
                           files=injected_files,
                           admin_pass=admin_password)
        xml = self.to_xml(context, instance, network_info,
                          disk_info, image_meta,
                          block_device_info=block_device_info,
                          write_to_disk=True)

        self._create_domain_and_network(context, xml, instance, network_info,
                                        block_device_info)
        LOG.debug(_("Instance is running"), instance=instance)

        def _wait_for_boot():
            """Called at an interval until the VM is running."""
            state = self.get_info(instance)['state']

            if state == power_state.RUNNING:
                LOG.info(_("Instance spawned successfully."),
                         instance=instance)
                raise loopingcall.LoopingCallDone()

        timer = loopingcall.FixedIntervalLoopingCall(_wait_for_boot)
        timer.start(interval=0.5).wait()

If you are familiar with libvirt, concepts here such as domain and XML will immediately ring a bell; if not, it is worth reading up on libvirt first. Let's walk through this code:

        self._create_image(context, instance,
                           disk_info['mapping'],
                           network_info=network_info,
                           block_device_info=block_device_info,
                           files=injected_files,
                           admin_pass=admin_password)

This creates the image, i.e. the disk files backing the instance. Xiao Qin will not go into this code here; it will be covered later when analyzing images in OpenStack.

        xml = self.to_xml(context, instance, network_info,
                          disk_info, image_meta,
                          block_device_info=block_device_info,
                          write_to_disk=True)

If you have read the libvirt material mentioned above, the meaning of this call is clear: it generates the XML definition of our virtual machine.

        self._create_domain_and_network(context, xml, instance, network_info,
                                        block_device_info)
        LOG.debug(_("Instance is running"), instance=instance)

These two lines are the heart of it. The HYPERVISOR category on Xiao Qin's blog discusses the difference between create domain and define domain; the 'create domain' here is actually closer to define domain plus start domain. The implementation simply calls libvirt to do the work, so readers who know libvirt can dig into it; otherwise it is enough to know that this is where the virtual machine finally, really gets created. The key code is:

        if xml:
            try:
                domain = self._conn.defineXML(xml)
            except Exception as e:
                LOG.error(_("An error occurred while trying to define a domain"
                            " with xml: %s") % xml)
                raise e

And with that, our virtual machine is up.
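To see the define/start pairing outside of Nova, here is a minimal sketch using the libvirt Python bindings; the XML and the connection URI are illustrative, nothing like the full definition Nova generates:

import libvirt

DEMO_XML = """
<domain type='qemu'>
  <name>demo</name>
  <memory unit='MiB'>256</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open('qemu:///system')
dom = conn.defineXML(DEMO_XML)  # persist the definition ('define domain')
dom.create()                    # then actually boot it ('start domain')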

5. Summary
Based on the trace above, the main flow of creating a server is:
1. Check whether adminPass is specified in the body
2. Check whether the server name is present in the body
3. Check whether a concrete imageRef is specified; if not, look for block_device_mapping or block_device_mapping_v2
4. Check whether the body specifies personality, and read the config_drive setting given in the body
5. If personality is specified, load the files from the body that need to be injected into the instance into memory
6. Check whether security_groups is specified; if not, use the default group
7. Read the network values from the body; a network entry can carry a fixed IP, a port and so on, or just a UUID
8. Check whether the body specifies access IPs
9. Get the flavor id
10. Check whether the body specifies key, user_data or availability_zone
11. Process the block devices in the body
12. Check whether min_count and max_count are present in the body and, if so, whether they are valid values
13. Check whether the body sets the auto_disk_config field
14. Check whether hints were set for the scheduling algorithm (the scheduler_hints parameter)
15. Create the virtual machine based on all the information gathered above
16. Return the result

The detailed flow of step 15 is:
1. Check that the user has sufficient permission to create virtual machines (via the policy.json file)
2. Check that the network parameters for Neutron are reasonable
3. Check whether a flavor id was provided; if not, use the default instance type
4. If booting from an image, obtain a Glance service class together with the image's metadata (the class can be used to download the actual image); if booting from a volume, obtain the volume information instead
5. Use the image information to decide whether auto config is possible
6. Parse the format of the az parameter: az:host:node parses into three parts, az::node into two; if az is not provided, it defaults to CONF.default_schedule_zone
7. Get the block device mapping
8. Run the quota-related checks
9. Build a dict of filter properties
10. If an instance group exists, perform the related operations
11. Record the start state in the database
12. Call the scheduler. The driver configured by scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler decides where to build: first all hosts are filtered by the filters in scheduler_default_filters, the survivors are weighed and sorted according to scheduler_weight_classes, and finally one of the top scheduler_host_subset_size hosts is chosen at random as the selected host
13. Send a notification announcing that a VM is about to be built
14. Record the VM's information in the database
15. Use RPC to have the compute node build the VM (calling compute's run_instance method)
16. The compute node calls the concrete driver's methods to create the VM: set up the network, disks and so on, generate an XML definition from the gathered configuration, and finally call libvirt to create the virtual machine from that XML
