1. Getting Started
OpenNebula provides seven commands to interact with the system:
onevm
: to submit, control and monitor virtual machines
onehost
: to add, delete and monitor hosts
onevnet
: to add, delete and monitor virtual networks
oneuser
: to add, delete and monitor users
oneimage
: to add, delete and control images
onecluster
: to add, delete and control clusters
oneauth
: tools to manage authentication and authorization
These are the options shared by some of the commands:
Number ranges can also be specified this way:
[<start>-<end>]
: generates numbers from start to end
[<start>+<count>]
: generates a range that starts with the number provided and has count number of elements
If the first digit of <start> is 0, the generated numbers are zero-padded to the same width as the last element in the range.
Example:
[9-11]: 9 10 11
[09-11]: 09 10 11
[8+3]: 8 9 10
[08+3]: 08 09 10
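For instance, a range can address several objects in one command; [0-2] below expands to the IDs 0 1 2. This is a hypothetical illustration; whether a given subcommand accepts ranges depends on the CLI version, see the man pages below:
$ onevm shutdown [0-2]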
Man pages:
Register an image with oneimage. First create an image template that will be used to register the image:
NAME = "Ubuntu Desktop"
PATH = /home/cloud/images/ubuntu-desktop/disk.0
PUBLIC = YES
DESCRIPTION = "Ubuntu 10.04 desktop for students."
$ oneimage register ubuntu.oneimg
$ oneimage list
ID USER NAME TYPE REGTIME PUB STAT #VMS
1 oneadmin Ubuntu Desktop OS Jul 11, 2010 15:17 Yes rdy 0
Now this image is ready to use. Next, create a virtual machine template to submit with onevm:
CPU = 1
MEMORY = 2056
DISK = [ image = "Ubuntu Desktop" ]
DISK = [ type = swap,
size = 1024 ]
NIC = [ NETWORK = "Public network" ]
$ onevm create myfirstVM.template
On success an ID is returned; this ID is the handle for monitoring and control. Check it with list:
$ onevm list
ID USER NAME STAT CPU MEM HOSTNAME TIME
0 oneadmin one-0 runn 0 65536 host01 00 0:00:02
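For more detail on a single VM you can use onevm show (output omitted here):
$ onevm show 0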
The command to live-migrate VM 0 to host 1 is:
$ onevm livemigrate 0 1
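A quick sanity check after the migration is to list the VM again; the HOSTNAME column should now show the target host:
$ onevm list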
How the system operates:
OpenNebula does the following:
The main functional components of an OpenNebula Private Cloud are the following:
Image Definition Template 2.2:
http://www.opennebula.org/documentation:rel2.2:img_template
Virtual Machine Definition File 2.2:
http://www.opennebula.org/documentation:rel2.2:template
The basic components of an OpenNebula system are:
See the earlier installation document for the required dependency packages.
The cluster front-end will export the image repository and the OpenNebula installation directory to the cluster nodes. The size of the image repository depends on the number of images (and size) you want to store. Also when you start a VM you will be usually cloning (copying) it, so you must be sure that there is enough space to store all the running VM images.
Create the following hierarchy in the front-end root file system:
/srv/cloud/one
, will hold the OpenNebula installation and the clones for the running VMs
/srv/cloud/images
, will hold the master images and the repository
$ tree /srv
/srv/
|
`-- cloud
    |-- one
    `-- images
Example: a 64-core cluster will typically run around 80 VMs, and each VM will require an average of 10GB of disk space. So you will need ~800GB for /srv/cloud/one; you will also want to store 10-15 master images, so ~200GB for /srv/cloud/images. A 1TB /srv/cloud is enough for this example setup.
Export /srv/cloud to all the cluster nodes. For example, if you have all your physical nodes in a local network with address 192.168.0.0/24, add a line like this to your /etc/exports file:
$ cat /etc/exports
/srv/cloud 192.168.0.0/255.255.255.0(rw)
In other words, create the /srv/cloud directory on every cluster node and mount it from the front-end, as sketched below.
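A minimal sketch of the mount on a cluster node (assuming the front-end is reachable under the host name frontend; adjust names and NFS options to your site):
# mkdir -p /srv/cloud
# mount -t nfs frontend:/srv/cloud /srv/cloud
To make the mount persistent, add a line like this to each node's /etc/fstab:
frontend:/srv/cloud /srv/cloud nfs defaults 0 0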
The Virtual Infrastructure is administered by the oneadmin account; this account will be used to run the OpenNebula services and to do regular administration and maintenance tasks.
Follow these steps:
Create the cloud group where the OpenNebula administrator user will be:
# groupadd cloud
Create the administration account (oneadmin); we will use the OpenNebula directory as the home directory for this user:
# useradd -d /srv/cloud/one -g cloud -m oneadmin
Check the user and group ids:
$ id oneadmin
uid=1001(oneadmin) gid=1001(cloud) groups=1001(cloud)
In this case the user id is 1001 and the group id is also 1001.
Create the same group and user on every cluster node, keeping the same ids:
# groupadd --gid 1001 cloud
# useradd --uid 1001 -g cloud -d /srv/cloud/one oneadmin
Network
A typical cluster node with two physical networks, one for public IP addresses (attached to the eth0 NIC) and the other for private virtual LANs (the eth1 NIC), should have two bridges:
$ brctl show
bridge name bridge id STP enabled interfaces
vbr0 8000.001e682f02ac no eth0
vbr1 8000.001e682f02ad no eth1
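The bridges themselves are created with your distribution's network tools. A sketch for a Debian-style /etc/network/interfaces stanza (the address is a placeholder; other distributions use their own configuration files):
auto vbr0
iface vbr0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bridge_ports eth0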
You need to create ssh keys for the oneadmin user and configure the machines so that it can connect to them over ssh without a password.
Generate oneadmin's ssh keys:
$ ssh-keygen
When prompted for a password press enter, so the private key is not encrypted.
Copy the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without typing a password. Do that also for the front-end:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Tell ssh not to ask before adding hosts to the known_hosts file. This goes into ~/.ssh/config:
$ cat ~/.ssh/config
Host *
StrictHostKeyChecking no
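A quick check that passwordless ssh works from the front-end (host01 is the example node name used later in this guide):
$ ssh host01 hostname
host01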
Platform Notes:
http://www.opennebula.org/documentation:rel2.2:notes
Storage Guide:
http://www.opennebula.org/documentation:rel2.2:sm
Networking Guide 2.2:
http://www.opennebula.org/documentation:rel2.2:nm
Managing Virtual Networks 2.2:
The virtualization technology installed in your cluster nodes has to be configured so that the oneadmin user can start, control and monitor VMs. This usually means executing commands with root privileges or making oneadmin part of a given group. Please take a look at the virtualization guide that fits your site:
3. Installation Guide:
Once the OpenNebula software is installed specifying a directory with the -d option, the following tree should be found under $ONE_LOCATION:
When the OpenNebula software is installed without specifying a directory with the -d option, the following tree reflects the files installed:
4. Configuration Guide
OpenNebula comprises the execution of three types of processes:
oned
: the OpenNebula daemon, to orchestrate the operation of all the modules and control the VM's life-cycle
mm_sched
: the scheduler, to take VM placement decisions (described below)
drivers
: separate processes to access specific cluster systems, e.g. the hypervisor or the storage (described below)
In this section you'll learn how to configure and start these services.
The configuration file for the daemon is called oned.conf and it is placed inside the $ONE_LOCATION/etc directory (or in /etc/one if OpenNebula was installed system wide).
OpenNebula Daemon Configuration 2.2:
http://www.opennebula.org/documentation:rel2.2:oned_conf
The oned.conf file consists of the following sections:
The following example will configure OpenNebula to work with KVM and a shared FS:
# Attributes
HOST_MONITORING_INTERVAL = 60
VM_POLLING_INTERVAL = 60
VM_DIR = /srv/cloud/one/var # Path in the cluster nodes to store VM images
SCRIPTS_REMOTE_DIR = /tmp/one
DB = [ backend = "sqlite" ]
VNC_BASE_PORT = 5000
NETWORK_SIZE = 254 # default
MAC_PREFIX = "00:03"
DEFAULT_IMAGE_TYPE = "OS"
DEFAULT_DEVICE_PREFIX = "hd"
# Drivers
IM_MAD = [ name = "im_kvm", executable = "one_im_ssh", arguments = "kvm" ]
VM_MAD = [ name = "vmm_kvm", executable = "one_vmm_sh", arguments = "kvm",
           default = "vmm_sh/vmm_sh_kvm.conf", type = "kvm" ]
TM_MAD = [ name = "tm_nfs", executable = "one_tm", arguments = "tm_nfs/tm_nfs.conf" ]
Be sure that VM_DIR is set to the path where the front-end's $ONE_LOCATION/var directory is mounted in the cluster nodes. If this path is the same in the front-end and the cluster nodes (i.e., the worker nodes mount $ONE_LOCATION/var at the same path as the front-end), then the VM_DIR variable is not needed.
The Scheduler module is in charge of the assignment between pending Virtual Machines and cluster nodes. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. OpenNebula comes with a match-making scheduler (mm_sched) that implements the Rank Scheduling Policy.
The goal of this policy is to prioritize the resources that are more suitable for the VM. You can configure several resource- and load-aware policies by simply specifying specific RANK expressions in the Virtual Machine definition files, as in the sketch below. Check the scheduling guide to learn how to configure the scheduler and make use of these policies.
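For example, a VM definition could carry a rank expression like the following (FREECPU and RUNNING_VMS are host attributes gathered by the monitoring probes; a sketch based on the scheduling guide):
# Prefer the nodes with the most free CPU:
RANK = FREECPU
# Or pack VMs onto the already loaded nodes:
# RANK = RUNNING_VMS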
Scheduling Policies 2.2:
http://www.opennebula.org/documentation:rel2.2:schg
Drivers are separate processes that communicate with the OpenNebula core using an internal ASCII protocol. Before a driver is loaded, two run-command (RC) files are sourced to optionally obtain environment variables.
These two RC files are:
$ONE_LOCATION/etc/defaultrc
. Global environment and tasks for all the drivers. Variables are defined
using sh syntax, and upon read, exported to the driver's environment:
# Debug for MADs [0=ERROR, 1=DEBUG]
# If set, MADs will generate cores and logs in $ONE_LOCATION/var.
ONE_MAD_DEBUG=
# Nice Priority to run the drivers
PRIORITY=19
Each driver's own rc file, which may redefine the defaultrc variables. Please see each driver's configuration guide for specific options.
When you run OpenNebula for the first time it will create an administration account. Be sure to put the user and password as a single user:password line in the $ONE_AUTH file.
The OpenNebula daemon and the scheduler can be easily started with the $ONE_LOCATION/bin/one script. Just execute as the <oneadmin> user:
$ one start
OpenNebula by default truncates older logs. If you want to keep OpenNebula's main log, supply the -b option to back it up automatically:
$ one -b start
If you do not want to start the scheduler, just use oned; check oned -h for options.
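When you are done, the same script can stop both processes (a note based on this release's one script, which also accepts a stop action):
$ one stop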
Now we should have two processes running:
oned
: Core process, attends the CLI requests, manages the pools and all the components
mm_sched
: Scheduler process, in charge of the VM to cluster node matching
If those processes are running, you should see content in their log files (log files are placed in /var/log/one/ if you installed OpenNebula system wide):
$ONE_LOCATION/var/oned.log
$ONE_LOCATION/var/sched.log
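A quick way to watch the daemon while it starts (use /var/log/one/oned.log instead for a system wide installation):
$ tail -f $ONE_LOCATION/var/oned.log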
There are two account types in the OpenNebula system:
oneadmin
: has enough privileges to perform any operation on any object (virtual machine, network, host or user)
regular user accounts
: created by <oneadmin>; they can only manage their own objects (virtual machines and networks)
Resources (virtual networks and images) created by oneadmin are public and can be used by every other user.
OpenNebula users should have the following environment variables set:
Variable | Description
---|---
ONE_AUTH | Needs to point to a file containing just a single line stating "username:password". If ONE_AUTH is not defined, $HOME/.one/one_auth will be used instead. If no auth file is present, OpenNebula cannot work properly, as this is needed by the core, the CLI and the cloud components.
ONE_LOCATION | If OpenNebula was installed in self-contained mode, this variable must be set to <destination_folder>. Otherwise, in system wide mode, this variable must be unset. More info on installation modes can be found here.
ONE_XMLRPC | http://localhost:2633/RPC2
PATH | $ONE_LOCATION/bin:$PATH if self-contained. Otherwise this is not needed.
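A sketch of how these could be set in oneadmin's shell profile for a self-contained installation (the paths are the examples used in this guide):
export ONE_LOCATION=/srv/cloud/one
export ONE_AUTH=$HOME/.one/one_auth
export ONE_XMLRPC=http://localhost:2633/RPC2
export PATH=$ONE_LOCATION/bin:$PATH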
User accounts within the OpenNebula system are managed by <oneadmin> with the oneuser utility. Users can be easily added to the system like this:
$ oneuser create helen mypass
In this case user helen should include the following content in the $ONE_AUTH file:
$ export ONE_AUTH="/home/helen/.one/one_auth"
$ cat $ONE_AUTH
helen:mypass
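The one_auth file itself can be created like this (the chmod is just a sensible precaution, since the file holds a clear-text password):
$ mkdir -p $HOME/.one
$ echo "helen:mypass" > $HOME/.one/one_auth
$ chmod 600 $HOME/.one/one_auth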
Users can be deleted simply with:
$ oneuser delete john
To list the users in the system just issue the command:
$ oneuser list
UID NAME PASSWORD ENABLE
0 oneadmin c24783ba96a35464632a624d9f829136edc0175e True
1 paul e727d1464ae12436e899a726da5b2f11d8381b26 True
2 helen 34a91f713808846ade4a71577dc7963631ebae14 True
More information about the oneuser utility can be found in the Command Line Reference.
Finally, the physical nodes have to be added to the system as OpenNebula hosts. Hosts can be added anytime with the onehost
utility, like this:
$ onehost create host01 im_kvm vmm_kvm tm_nfs
$ onehost create host02 im_kvm vmm_kvm tm_nfs
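To verify that the nodes were registered, list them; monitoring data appears after the first monitoring cycle (HOST_MONITORING_INTERVAL seconds at most):
$ onehost list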
Physical host monitoring is performed by probes, which are scripts designed to extract pieces of information from the host operating system. These scripts are copied to SCRIPTS_REMOTE_DIR (set in $ONE_LOCATION/etc/oned.conf) on the remote executing nodes when a host is added to the system (i.e., upon "onehost create"), and they will be copied again if any probe is removed or added on the front-end ($ONE_LOCATION/lib/remotes). If any script is modified, or for any reason the administrator wants to force the probes to be copied again to a particular host, the "onehost sync" functionality can be used.
You can find a complete explanation in the guide for managing physical hosts and clusters, and the Command Line Interface reference.
There are different log files corresponding to different OpenNebula components:
$ONE_LOCATION/var/oned.log. Its verbosity is regulated by DEBUG_LEVEL in $ONE_LOCATION/etc/oned.conf.
$ONE_LOCATION/var/<VID>/ (or /var/lib/one/<VID> in a system wide installation). You can find the following information in it:
The VM log file (placed in /var/log/one if OpenNebula was installed system wide).
Deployment description files, named deployment.<EXECUTION>, where <EXECUTION> is the sequence number in the execution history of the VM (deployment.0 for the first host, deployment.1 for the second, and so on).
Transfer description files, named transfer.<EXECUTION>.<OPERATION>, where <EXECUTION> is the sequence number in the execution history of the VM and <OPERATION> is the stage where the script was used, e.g. transfer.0.prolog, transfer.0.epilog, or transfer.1.cleanup.
Copies of the VM's disk images, in the images/ sub-directory; images are named disk.<id>.
A checkpoint file, named checkpoint.
Driver log files, at $ONE_LOCATION/var/name-of-the-driver-executable.log; log information of the drivers also goes to oned.log.