OpenNebula Configuration

I. Getting Started

OpenNebula provides the following commands to interact with the system:

  • onevm: to submit, control and monitor virtual machines
  • onehost: to add, delete and monitor hosts
  • onevnet: to add, delete and monitor virtual networks
  • oneuser: to add, delete and monitor users
  • oneimage: to add, delete and control images
  • onecluster: to add, delete and control clusters
  • oneauth: tools to manage authentication and authorization

These are the options shared by some of the commands:

  • -l, --list x,y,z: Selects columns to display with the list command.
  • --list-columns: Information about the columns available to display, order or filter.
  • -o, --order x,y,z: Order by these columns; a column starting with - means decreasing order.
  • -f, --filter x,y,z: Filter data. An array is specified with column=value pairs.
  • -d, --delay seconds: Sets the delay in seconds for the top command.
  • -h, --help: Shows help information.
  • --version: Shows version and copyright information.
  • -v, --verbose: Prints more information when the command succeeds.
  • -x, --xml: Returns XML instead of human-readable text.
  • -n, --no-hash: Stores the plain password in the database.

Number ranges can also be specified this way:

  • [<start>-<end>]: generates numbers from start to end
  • [<start>+<count>]: generates a range that starts with the number provided and has count number of elements

If <start> begins with 0, the generated numbers are zero-padded to the same width as the last element in the range.

Example:

[9-11]:   9 10 11
[09-11]: 09 10 11
[8+3]: 8 9 10
[08+3]: 08 09 10


1. Building a Private Cloud

Register an image with oneimage. First, create an image template that will be used to register the image:

NAME        = "Ubuntu Desktop"
PATH        = /home/cloud/images/ubuntu-desktop/disk.0
PUBLIC      = YES
DESCRIPTION = "Ubuntu 10.04 desktop for students."

 

$ oneimage register ubuntu.oneimg
$ oneimage list
ID USER NAME TYPE REGTIME PUB STAT #VMS
1 oneadmin Ubuntu Desktop OS Jul 11, 2010 15:17 Yes rdy 0

The image is now ready to use. We also need a virtual machine template to submit with onevm:

CPU    = 1
MEMORY = 2056

DISK = [ image = "Ubuntu Desktop" ]

DISK = [ type = swap,
         size = 1024 ]

NIC = [ NETWORK = "Public network" ]

Make sure you are in your home directory, then run:

$ onevm create myfirstVM.template

If it succeeds, an ID is returned; this ID is the basis for monitoring and control. Check it with list:

$ onevm list
ID USER NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin one-0 runn 0 65536 host01 00 0:00:02

The command to live-migrate a VM (here VM 0 to host 1) is:

$ onevm livemigrate 0 1


How the system operates:

OpenNebula does the following:

  • Manages Virtual Networks. Virtual networks interconnect VMs. Each virtual network includes a description.
  • Creates VMs. The VM description is added to the database.
  • Deploys VMs. According to the allocation policy, the scheduler decides where to execute the VMs.
  • Manages VM Images. Images can be registered before execution. When submitted, VM images are transferred to the host and swap disk images are created. After execution, VM images may be copied back to the repository.
  • Manages Running VMs. VMs are started, periodically polled to get their consumption and state, and can be shut down, suspended, stopped or migrated.

The main functional components of an OpenNebula Private Cloud are the following:

  • Hypervisor: Virtualization manager installed in the resources of the cluster that OpenNebula leverages for the management of the VMs within each host.
  • Virtual Infrastructure Manager: Centralized manager of VMs and resources, providing virtual network management, VM life-cycle management, VM image management and fault tolerance.
  • Scheduler: VM placement policies for balance of workload, server consolidation, placement constraints, affinity, advance reservation of capacity and SLA commitment.

Image Definition Template 2.2:

http://www.opennebula.org/documentation:rel2.2:img_template

Virtual Machine Definition File 2.2

http://www.opennebula.org/documentation:rel2.2:template

 

2. Planning the Installation

The basic components of an OpenNebula system are:

  • Front-end, executes the OpenNebula and cluster services.
  • Nodes, hypervisor-enabled hosts that provide the resources needed by the Virtual Machines.
  • Image repository, any storage medium that holds the base images of the VMs.
  • OpenNebula daemon, is the core service of the system. It manages the life-cycle of the VMs and orchestrates the cluster subsystems (network, storage and hypervisors).
  • Drivers, programs used by the core to interface with a specific cluster subsystem, e.g. a given hypervisor or storage file system.
  • oneadmin, is the administrator of the private cloud that performs any operation on the VMs, virtual networks, nodes or users.
  • Users, use the OpenNebula facilities to create and manage their own virtual machines and virtual networks.

The dependency packages required for the installation are covered in the earlier installation document.

 

The cluster front-end will export the image repository and the OpenNebula installation directory to the cluster nodes. The size of the image repository depends on the number of images (and size) you want to store. Also when you start a VM you will be usually cloning (copying) it, so you must be sure that there is enough space to store all the running VM images.

Create the following hierarchy in the front-end root file system:

  • /srv/cloud/one, will hold the OpenNebula installation and the clones for the running VMs
  • /srv/cloud/images, will hold the master images and the repository
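
A minimal sketch of creating this hierarchy on the front-end, run as root (ownership can be adjusted once the oneadmin account exists):

# mkdir -p /srv/cloud/one /srv/cloud/images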

 

$ tree /srv
/srv/
`-- cloud
    |-- one
    `-- images

Example: A 64-core cluster will typically run around 80 VMs, each VM requiring an average of 10GB of disk space. So you will need ~800GB for /srv/cloud/one, and you will also want to store 10-15 master images, so ~200GB for /srv/cloud/images. A 1TB /srv/cloud will be enough for this example setup.

Export /srv/cloud to all the cluster nodes. For example, if you have all your physical nodes in a local network with address 192.168.0.0/24 you will need to add to your /etc/exports file a line like this:

 

 

$ cat /etc/exports
/srv/cloud 192.168.0.0/255.255.255.0(rw)

In other words, create the /srv/cloud directory on every cluster node and mount it there from the front-end.
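
A minimal sketch on a cluster node, assuming the front-end is reachable under the hypothetical name frontend:

# mkdir -p /srv/cloud
# mount -t nfs frontend:/srv/cloud /srv/cloud

To make the mount permanent, an /etc/fstab line such as the following could be added:

frontend:/srv/cloud  /srv/cloud  nfs  defaults  0  0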

 

User Account

The Virtual Infrastructure is administered by the oneadmin account; this account is used to run the OpenNebula services and to perform regular administration and maintenance tasks.

Follow these steps:

  • Create the cloud group that the OpenNebula administrator user will belong to:
    # groupadd cloud
  • Create the OpenNebula administrative account ( oneadmin), we will use OpenNebula directory as the home directory for this user:
    # useradd -d /srv/cloud/one -g cloud -m oneadmin
  • Get the user and group id of the OpenNebula administrative account. This id will be used later to create users in the cluster nodes with the same id:
    $ id oneadmin
    uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)
    In this case the user id will be 1001 and group also 1001.
  • Create the group and user accounts also on every node that runs VMs. Make sure that the ids are the same as on the frontend, in this example 1001:
    # groupadd --gid 1001 cloud
    # useradd --uid 1001 -g cloud -d /srv/cloud/one oneadmin

Network

A typical cluster node with two physical networks, one for public IP addresses (attached to the eth0 NIC) and the other for private virtual LANs (NIC eth1), should have two bridges:

 

$ brctl show
bridge name   bridge id           STP enabled   interfaces
vbr0          8000.001e682f02ac   no            eth0
vbr1          8000.001e682f02ad   no            eth1
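
How the bridges are created is distribution-specific. As a rough sketch for a Debian/Ubuntu-style node with bridge-utils installed (the addresses are purely illustrative), /etc/network/interfaces could contain:

auto vbr0
iface vbr0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bridge_ports eth0

auto vbr1
iface vbr1 inet manual
    bridge_ports eth1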

 

Secure Shell Access

You need to create ssh keys for the oneadmin user and configure the machines so that it can connect to them over ssh without being asked for a password.

  • Generate oneadmin ssh keys:
    $ ssh-keygen
    When prompted for a passphrase, press enter so the private key is not encrypted.
  • Append the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without typing a password. Do this on the frontend as well:
    $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  • Many distributions (RHEL/CentOS for example) have permission requirements for the public key authentication to work:
    • .ssh/: 700
    • .ssh/id_dsa.pub: 600
    • .ssh/id_dsa: 600
    • .ssh/authorized_keys: 600
  • Tell the ssh client not to ask before adding hosts to the known_hosts file. This goes into ~/.ssh/config:
    $ cat ~/.ssh/config
    Host *
    StrictHostKeyChecking no
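
Because the oneadmin home directory is exported over NFS, the key pair and authorized_keys are shared by every node, so a quick sanity check from the front-end (host01 is just an example node name) could be:

$ ssh oneadmin@host01 hostname
host01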

Platform Notes:

http://www.opennebula.org/documentation:rel2.2:notes

Storage Guide:

http://www.opennebula.org/documentation:rel2.2:sm

Networking Guide 2.2:

http://www.opennebula.org/documentation:rel2.2:nm

Managing Virtual Networks 2.2:

http://www.opennebula.org/documentation:rel2.2:vgg

Hypervisor

The virtualization technology installed in your cluster nodes has to be configured so that the oneadmin user can start, control and monitor VMs. This usually means executing commands with root privileges or making oneadmin part of a given group. Please take a look at the virtualization guide that fits your site.
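
As a rough sketch for KVM nodes, this could mean adding oneadmin to the groups that grant access to libvirt and /dev/kvm (the exact group names vary by distribution, so treat these as assumptions):

# usermod -a -G libvirtd oneadmin
# usermod -a -G kvm oneadmin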

3. Installation Guide

Self contained

Once the OpenNebula software is installed specifying a directory with the -d option, the following tree should be found under $ONE_LOCATION:

System wide

Once the OpenNebula software is installed without specifying a directory with the -d option, the following tree reflects the files installed:

4. Configuration Guide

OpenNebula Components

OpenNebula comprises the execution of three types of processes:

  • The OpenNebula daemon ( oned), to orchestrate the operation of all the modules and control the VM's life-cycle
  • The drivers to access specific cluster systems (e.g. storage or hypervisors)
  • The scheduler to take VM placement decisions

In this section you'll learn how to configure and start these services.

OpenNebula Daemon

The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory (or in /etc/one if OpenNebula was installed system wide).

OpenNebula Daemon Configuration 2.2:

http://www.opennebula.org/documentation:rel2.2:oned_conf

The oned.conf file consists of the following sections:

  • General configuration attributes, such as the interval between cluster node and VM monitoring actions, or the MAC prefix to be used. See more details...
  • Information Drivers, the specific adaptors that will be used to monitor cluster nodes. See more details...
  • Virtualization Drivers, the adaptors that will be used to interface the hypervisors. See more details...
  • Transfer Drivers, that are used to interface with the storage system to clone, delete or move VM images. See more details...
  • Image Repository, used to store images for virtual machines. See more details...
  • Hooks, that are executed on specific events, e.g. VM creation. See more details...

The following example will configure OpenNebula to work with KVM and a shared FS:

# Attributes
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL      = 60

VM_DIR = /srv/cloud/one/var   # Path in the cluster nodes to store VM images

SCRIPTS_REMOTE_DIR = /tmp/one

DB = [ backend = "sqlite" ]

VNC_BASE_PORT = 5000

NETWORK_SIZE = 254   # default
MAC_PREFIX   = "00:03"

DEFAULT_IMAGE_TYPE    = "OS"
DEFAULT_DEVICE_PREFIX = "hd"

# Drivers
IM_MAD = [ name = "im_kvm",  executable = "one_im_ssh", arguments = "kvm" ]
VM_MAD = [ name = "vmm_kvm", executable = "one_vmm_sh", arguments = "kvm",
           default = "vmm_sh/vmm_sh_kvm.conf", type = "kvm" ]
TM_MAD = [ name = "tm_nfs",  executable = "one_tm", arguments = "tm_nfs/tm_nfs.conf" ]

 

Be sure that VM_DIR is set to the path where the front-end's $ONE_LOCATION/var directory is mounted in the cluster nodes. If this path is the same in the front-end and the cluster nodes (i.e., the worker nodes mount $ONE_LOCATION/var in the same path as the front-end), then the VM_DIR variable is not needed.

Scheduler

The Scheduler module is in charge of the assignment between pending Virtual Machines and cluster nodes. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. OpenNebula comes with a match making scheduler (mm_sched) that implements the Rank Scheduling Policy.

The goal of this policy is to prioritize those resources more suitable for the VM. You can configure several resource and load aware policies by simply specifying specific RANK expressions in the Virtual Machine definition files. Check the scheduling guide to learn how to configure the scheduler and make use of these policies.
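
As an illustration (FREECPU and RUNNING_VMS are standard host monitoring attributes, but treat the exact policy as an example rather than a recommendation), a VM template could include a RANK expression like one of these:

# Spread VMs: prefer the host with the most free CPU
RANK = FREECPU

# Consolidate VMs: prefer hosts that already run more VMs
# RANK = RUNNING_VMS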

Scheduling Policies 2.2:

http://www.opennebula.org/documentation:rel2.2:schg

Drivers

Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before loading a driver, two run-command (RC) files are sourced to optionally obtain environment variables.

These two RC files are:

  • $ONE_LOCATION/etc/defaultrc. Global environment and tasks for all the drivers. Variables are defined using sh syntax and, upon read, exported to the driver's environment:
# Debug for MADs [0=ERROR, 1=DEBUG]
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
ONE_MAD_DEBUG=
# Nice Priority to run the drivers
PRIORITY=19
  • The driver-specific rc file, which sets environment variables for that particular driver.

 

Start & Stop OpenNebula

When you execute OpenNebula for the first time it will create an administration account. Be sure to put the user and password in a single line as user:password in the $ONE_AUTH file.
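
For example, before the first start the file could be prepared like this (oneadmin:onecloud is only a placeholder credential):

$ mkdir -p ~/.one
$ echo "oneadmin:onecloud" > ~/.one/one_auth
$ export ONE_AUTH=~/.one/one_auth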

The OpenNebula daemon and the scheduler can be easily started with the $ONE_LOCATION/bin/one script. Just execute as the <oneadmin> user:

$ one start

OpenNebula truncates old logs by default. If you want to keep a backup of OpenNebula's main log, supply the -b option to back it up automatically:

$ one -b start

If you do not want to start the scheduler, just use oned; check oned -h for options.

Two processes should now be running:

  • oned : Core process, handles CLI requests, manages the pools and all the components
  • mm_sched : Scheduler process, in charge of the VM to cluster node matching

If these processes are running, you should see content in their log files (log files are placed in /var/log/one/ if you installed OpenNebula system wide):

  • $ONE_LOCATION/var/oned.log
  • $ONE_LOCATION/var/sched.log
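
A quick way to confirm both daemons came up cleanly (paths shown are for a self-contained installation; use /var/log/one/ otherwise):

$ tail -n 5 $ONE_LOCATION/var/oned.log $ONE_LOCATION/var/sched.log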

OpenNebula Users

There are two account types in the OpenNebula system:

  • The oneadmin account is created the first time OpenNebula is started using the ONE_AUTH data, see below. oneadmin has enough privileges to perform any operation on any object (virtual machine, network, host or user).
  • Regular user accounts must be created by <oneadmin> and they can only manage their own objects (virtual machines and networks).

 

Virtual Networks created by oneadmin are public and can be used by every other user.

OpenNebula users should have the following environment variables set:

 

  • ONE_AUTH: Needs to point to a file containing just a single line stating “username:password”. If ONE_AUTH is not defined, $HOME/.one/one_auth will be used instead. If no auth file is present, OpenNebula cannot work properly, as this is needed by the core, the CLI, and the cloud components as well.
  • ONE_LOCATION: If OpenNebula was installed in self-contained mode, this variable must be set to <destination_folder>. Otherwise, in system wide mode, this variable must be unset. More info on installation modes can be found here.
  • ONE_XMLRPC: http://localhost:2633/RPC2
  • PATH: $ONE_LOCATION/bin:$PATH if self-contained. Otherwise this is not needed.
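
For a self-contained installation under /srv/cloud/one, a user's shell environment might therefore look like this (a sketch using the paths from this document):

$ cat ~/.bashrc
export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$HOME/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH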

 

Adding and Deleting Users

User accounts within the OpenNebula system are managed by <oneadmin> with the oneuser utility. Users can be easily added to the system like this:

$ oneuser create helen mypass

In this case user helen should include the following content in the $ONE_AUTH file:

$ export ONE_AUTH="/home/helen/.one/one_auth"
$ cat $ONE_AUTH
helen:mypass

Users can be deleted by simply:

$ oneuser delete john

To list the users in the system just issue the command:

> oneuser list
UID NAME PASSWORD ENABLE
0 oneadmin c24783ba96a35464632a624d9f829136edc0175e True
1 paul e727d1464ae12436e899a726da5b2f11d8381b26 True
2 helen 34a91f713808846ade4a71577dc7963631ebae14 True

 

Detailed information about the oneuser utility can be found in the Command Line Reference.

OpenNebula Hosts

Finally, the physical nodes have to be added to the system as OpenNebula hosts. Hosts can be added anytime with the onehost utility, like this:

 

$ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs

 

Before adding a host, check that you can ssh to it without being prompted for a password.

Physical host monitoring is performed by probes, which are scripts designed to extract pieces of information from the host operating system. These scripts are copied to SCRIPTS_REMOTE_DIR (set in $ONE_LOCATION/etc/oned.conf) on the remote executing nodes when a host is added to the system (i.e., upon “onehost create”), and they will be copied again if any probe is removed or added on the front-end ($ONE_LOCATION/lib/remotes). If any script is modified, or for any reason the administrator wants to force the probes to be copied again to a particular host, the “onehost sync” functionality can be used, as shown below.
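
For example, after modifying a probe on the front-end you could force a re-copy and then check that monitoring data is still coming back (host01 is just an example host; the exact columns of onehost list vary between versions):

$ onehost sync
$ onehost list
  ID NAME      RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
   0 host01      0    400    400    400    7.5G    7.2G   on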

You can find a complete explanation in the guide for managing physical hosts and clusters, and the Command Line Interface reference.

Logging and Debugging

There are different log files corresponding to different OpenNebula components:

  • ONE Daemon: The core component of OpenNebula dumps all its logging information onto $ONE_LOCATION/var/oned.log. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf.
  • Scheduler: All the scheduler information is collected into the $ONE_LOCATION/var/sched.log file.
  • Virtual Machines: All VMs controlled by OpenNebula have their own folder, $ONE_LOCATION/var/<VID>/ (or /var/lib/one/<VID> in a system wide installation). You can find the following information in it:
    • Log file : The information specific to the VM will be dumped to a file in this directory called vm.log. Note: These files are in /var/log/one if OpenNebula was installed system wide.
    • Deployment description files : Stored in deployment.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second and so on).
    • Transfer description files : Stored in transfer.<EXECUTION>.<OPERATION>, where <EXECUTION> is the sequence number in the execution history of the VM, <OPERATION> is the stage where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
    • Save images: Stored in images/ sub-directory, images are in the form disk.<id>.
    • Restore files : check-pointing information is also stored in this directory to restore the VM in case of failure. The state information is stored in a file called checkpoint.
  • Drivers: Each driver can have its ONE_MAD_DEBUG variable activated in its RC file (see the Drivers configuration section for more details). If so, error information will be dumped to $ONE_LOCATION/var/name-of-the-driver-executable.log; general driver log information goes to oned.log.
