OpenStack Code Analysis (1)

http://www.sandywalsh.com/2012/04/openstack-nova-internals-pt1-overview.html

OpenStack Nova Internals – pt1 – Overview


For the last 18 months or so I've been working on the OpenStack Nova project as a core developer. Initially, the project was small enough that you could find your way around the code base pretty easily. Other than being PEP8 compliant, there weren't many idioms you needed to follow to get your submissions accepted. But, as with any project, the deeper you get the more subtle the problems become. Consistently dealing with things like exception handling, concurrency, state management, synchronous vs. asynchronous operations and data partitioning becomes critical. As a core reviewer it gets harder and harder to remember all these rules, and as a new contributor it's very intimidating trying to get your first submission accepted.

For that reason I thought I'd take a little detour from my normal blog articles and dive into some of the unwritten programming idioms behind the OpenStack project. But to start I'll quickly go over the OpenStack source code layout and basic architecture.

I'm going to assume you know about Cloud IaaS topics (image management, hypervisors, instance management, networking, etc.), Python (or you're a flexible enough programmer that the language doesn't really matter) and event-driven frameworks (aka the Reactor Pattern).

Source Layout

Once you grab the Nova source you'll see it's pretty easy to understand the main code layout.
git clone https://github.com/openstack/nova.git
The actual code for the Nova services is in ./nova and the corresponding unit tests are in the related directory under ./nova/tests. Here's a short explanation of the Nova source directory structure:


├── etc
│   └── nova
├── nova
│   ├── api - the Nova HTTP service
│   │   ├── ec2 - the Amazon EC2 API bindings
│   │   ├── metadata
│   │   └── openstack - the OpenStack API
│   ├── auth - authentication libraries
│   ├── common - shared Nova components
│   ├── compute - the Nova Compute service
│   ├── console - instance console library
│   ├── db - database abstraction
│   │   └── sqlalchemy
│   │         └── migrate_repo
│   │               └── versions - Schema migrations for SqlAlchemy
│   ├── network - the Nova Network service
│   ├── notifier - event notification library
│   ├── openstack - ongoing effort to reuse Nova parts with other OpenStack parts.
│   │   └── common
│   ├── rpc - Remote Procedure Call libraries for Nova Services
│   ├── scheduler - Nova Scheduler service
│   ├── testing
│   │   └── fake - “Fakes” for testing
│   ├── tests - Unit tests. Sub directories should mirror ./nova
│   ├── virt - Hypervisor abstractions
│   ├── vnc - VNC libraries for console access to instances
│   └── volume - the Volume service
├── plugins - hypervisor host plugins. Mostly for XenServer.


Before we get too far into the source, you should have a good understanding of the architectural layout of Nova. OpenStack is a collection of Services. When I say a service I mean it's a process running on a machine. Depending on how large an OpenStack deployment you're shooting for, you may have just one of each service running (perhaps all on a single box) or you can run many of them across many boxes.

The core OpenStack services are: API, Compute, Scheduler and Network. You'll also need the Glance Image Service for guest OS images (which could be backed by the Swift Storage Service). We'll dive into each of these services later, but for now all we need to know is what their jobs are. API is the HTTP interface into Nova. Compute talks to the hypervisor running on each host (usually one Compute service per host). Network manages the IP address pool as well as talking to switches, routers, firewalls and related devices. The Scheduler selects the most appropriate Compute node from the available pool for new instances (although it may also be used for picking Volumes).

The database is not a Nova service per se. The database can be accessed directly from any Nova service (although it should not be accessed from the Compute service ... something we're working on cleaning up). Should a Compute node be compromised by a bad guest we don't want it getting to the database.

You may also run a stand-alone Authentication service (like Keystone) or the Volume service for disk management, but it's not mandatory.

OpenStack Nova uses AMQP (specifically RabbitMQ) as the communication bus between the services. AMQP messages are written to special queues and one of the related services picks them off for processing. This is how Nova scales. If you find a single Compute node can't handle the number of requests coming in, you can throw another Compute node service into the mix. Same with the other services.

If AMQP is the only way to communicate with the Services, how do the users issue commands? The answer is the API service. This is an HTTP service (a WSGI application in Python-speak). The API service listens for REST commands on the HTTP service and translates them into AMQP messages for the services. Likewise, responses from the services come in via AMQP and the API service turns them into valid HTTP Responses in the format the user requested. OpenStack currently speaks EC2 (the Amazon API) and OpenStack (a variant of the Rackspace API). We'll get into the gory details of the API service in a later post.
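To make the REST-to-AMQP translation concrete, here's a minimal sketch of the idea, not Nova's actual API service code. The route, topic name and message shape are illustrative assumptions; the only thing borrowed from Nova is the convention that an RPC message is a dict with a 'method' name and an 'args' dict.

```python
# Illustrative sketch only -- not Nova's actual API service code.
# Shows the shape of the REST -> AMQP translation: an HTTP verb/path
# becomes a message dict aimed at a service's AMQP topic.

def rest_to_amqp_message(method, path, body):
    """Translate a REST call like POST /servers into an RPC-style message."""
    if method == "POST" and path == "/servers":
        # The message format mirrors the nova.rpc convention:
        # a 'method' name plus an 'args' dict.
        return ("scheduler", {"method": "run_instance",
                              "args": {"request_spec": body}})
    raise KeyError("unhandled route: %s %s" % (method, path))

topic, msg = rest_to_amqp_message("POST", "/servers", {"flavor": "m1.small"})
```

The real API service does far more (auth, validation, response formatting), but the core job is exactly this mapping in both directions.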

But it's not just API that talks to the services. Services can also talk to each other. Compute may need to talk to Network and Volume to get necessary resources. If we're not careful about how we organize the source code, all of this communication could get a little messy. So, for our inaugural article, let's dive into the Service and RPC mechanism.

Notation

I'll be using the Python unittest notation for modules, methods and functions. Specifically,

nova.compute.api:API.run_instance equates to the run_instance method of the API class in the ./nova/compute/api.py file. Likewise, nova.compute.api.do_something refers to the do_something function in the ./nova/compute/api.py file.

Talking to a Service

With the exception of the API service, each Nova service must have a related Python module to handle RPC command marshalling. For example:

The network service has ./nova/network/api.py
The compute service has ./nova/compute/api.py
The scheduler service has ./nova/scheduler/api.py

... you get the idea. These modules are usually just large collections of functions that make the service do something. But sometimes they contain classes that have methods to do something. It all depends on whether we sometimes need to intercept the service calls. We'll touch on these use cases later.

The Scheduler service's nova.scheduler.api has perhaps the simplest interface, consisting of a handful of functions.

Network is pretty straightforward with a single API class, although it could have been implemented with functions because I don't think there are any derivations yet.

Compute has an interesting class hierarchy for call marshaling, like this:
BaseAPI -> API -> AggregateAPI
and BaseAPI -> HostAPI

But, most importantly, nova.compute.api.API is the main work horse and we'll get into the other derivations another day.

So, if I want to pause a running instance I would import nova.compute.api, instantiate the API class and call the pause() method on it. This will marshal up the parameters and send them to the Compute service that manages that instance by writing them to the correct AMQP queue. Finding the correct AMQP topic for that compute service is done with a quick database lookup for that instance, which happens via the nova.compute.api:BaseAPI._cast_or_call_compute_message method. With other services it may be as simple as importing the related api module and calling a function directly.
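The lookup-then-cast pattern just described can be sketched in a few lines. This is a toy, not Nova's code: the fake database, the cast() stand-in and the helper names are all illustrative, but the three steps (look up the owning host, build the per-host topic, cast the marshalled message) mirror what _cast_or_call_compute_message does.

```python
# Toy sketch of the lookup-then-cast pattern; the helper names and
# data here are illustrative, not Nova's actual implementation.

FAKE_DB = {"instance-123": {"host": "compute-node-7"}}

SENT = []  # stand-in for the AMQP bus


def cast(topic, msg):
    """Fire-and-forget send onto the (fake) message bus."""
    SENT.append((topic, msg))


def pause(instance_id):
    # 1. Database lookup to find which compute host owns the instance.
    host = FAKE_DB[instance_id]["host"]
    # 2. Build the per-host topic, e.g. "compute.compute-node-7".
    topic = "compute.%s" % host
    # 3. Marshal the parameters into a message and cast it.
    cast(topic, {"method": "pause_instance",
                 "args": {"instance_id": instance_id}})


pause("instance-123")
```

Note that pause() returns nothing: the request is asynchronous, which leads directly into the cast-vs-call distinction below.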

Casts vs. Calls

AMQP is not a proper RPC mechanism, but we can get RPC-like functionality from it relatively easily. In nova.rpc.__init__ there are two functions to handle this: cast() and call(). cast() performs an asynchronous invocation on a service, while call() expects a return value and is therefore a synchronous operation. What call() really does under the hood is dynamically create an ephemeral AMQP topic for the return message from the service. It then waits in an eventlet green thread until the response is received.

Exceptions can also be sent through this response topic and regenerated/rethrown on the caller's side if the exception is derived from nova.exception:NovaException. Otherwise, a nova.rpc.common:RemoteError is thrown.
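Here's a self-contained toy that shows the control flow of cast() vs. call(), including the ephemeral reply topic and exception propagation. Real Nova does this over AMQP with eventlet; here a dict of in-process queues stands in for the broker, and the pretend service is folded inline so the round trip is visible. None of these names are Nova's.

```python
# Minimal in-process sketch of cast() vs. call() over a message bus.
# A dict of queues stands in for the AMQP broker; names are illustrative.
import queue
import uuid

BUS = {}  # topic -> Queue


def _send(topic, msg):
    BUS.setdefault(topic, queue.Queue()).put(msg)


def cast(topic, msg):
    """Fire-and-forget: no reply expected."""
    _send(topic, msg)


def _serve_one(topic):
    """Pretend service: handle one message off the topic."""
    msg = BUS[topic].get()
    try:
        result = {"ping": "pong"}[msg["method"]]
    except KeyError as e:
        result = e  # exceptions travel back through the reply topic too
    if "_reply_to" in msg:
        _send(msg["_reply_to"], result)


def call(topic, msg):
    """Synchronous: create an ephemeral reply topic and wait on it."""
    reply_topic = "reply." + uuid.uuid4().hex
    msg["_reply_to"] = reply_topic
    _send(topic, msg)
    _serve_one(topic)  # in real life a service consumes this concurrently
    result = BUS[reply_topic].get(timeout=1)
    if isinstance(result, Exception):
        raise result  # rethrown on the caller's side
    return result


print(call("compute.host1", {"method": "ping"}))  # -> pong
```

The blocking get() on the reply queue is where real Nova parks an eventlet green thread instead of an OS thread.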

Ideally, we should only be performing asynchronous cast()s to services, since call()s are obviously more expensive. Be careful which one you choose. Try not to depend on return values if at all possible. Also, try to make your service functions idempotent if possible, since they may get wrapped up in some retry code down the road.

If you're really interested in how the RPC-over-AMQP machinery works, look at nova.rpc.impl_kombu.

Fail-Fast Architecture

OpenStack is a fail-fast architecture. What that means is if a request does not succeed it will throw an exception, which will likely bubble all the way up to the caller. But, since each request in an OpenStack Nova service is handled in an eventlet green thread, an exception generally won't destroy any threads or leave the system in a funny state. A new request can come in via AMQP or HTTP and get processed just as easily. Unless we are doing things that require explicit clean-up, it's generally ok not to be too paranoid a programmer. If you're expecting a certain value in a dictionary, it's ok to let a KeyError bubble up. You don't always need to derive a unique exception for every error condition ... even in the worst case, the WSGI middleware will convert it into something the client can handle. That's one of the nice things about threadless/event-driven programming. We'll get more into Nova's error handling in later posts.
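The fail-fast idea can be sketched with a tiny WSGI-style middleware, assumed names throughout, not Nova's actual middleware. The app lets a KeyError bubble up freely; the outermost layer converts it into an HTTP fault, and the service keeps running.

```python
# Sketch of fail-fast error handling in a WSGI-style stack.
# Names (app, fault_middleware) are illustrative, not Nova's.

def app(environ):
    # The app happily lets a KeyError bubble up if the key is missing --
    # no defensive checks, no bespoke exception class.
    return "200 OK", environ["instance_id"]


def fault_middleware(app):
    def wrapped(environ):
        try:
            return app(environ)
        except KeyError as e:
            # Worst case: the middleware turns the exception into
            # something the client can handle; the process survives.
            return "404 Not Found", "missing: %s" % e
    return wrapped


handler = fault_middleware(app)
print(handler({"instance_id": "i-1"}))  # ('200 OK', 'i-1')
print(handler({}))                      # ('404 Not Found', "missing: 'instance_id'")
```

In the event-driven setup only the one green thread handling the bad request unwinds; every other request in flight is untouched.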

Well, that sort of explains about Nova's source code layout and how services talk to each other. Next time we'll dig into service managers and drivers to see how services are implemented on the callee side.
