Ubuntu 12.04 OpenStack Installation and Deployment Guide
Release name | Release date | OpenStack Compute version number | OpenStack Object Storage version number |
Folsom | September 2012 | 2012.2 | 1.7.2 |
Essex | April 2012 | 2012.1 | 1.4.8 |
Diablo | October 2011 | 2011.3 | 1.4.3 |
Cactus | April 2011 | 2011.2 | 1.3.0 |
Bexar | March 2011 | 2011.1 | 1.2.0 |
Austin | October 2010 | 0.9.0 | 1.0.0 |
Service name | Code name |
Identity | Keystone |
Compute | Nova |
Image | Glance |
Dashboard | Horizon |
Object Storage | Swift |
Volumes | Cinder |
Networking | Quantum |
This guide covers:
- Installing the Identity service (Keystone).
- Configuring the Identity service.
- Installing the Image service (Glance).
- Configuring the Image service.
- Installing the Compute service (Nova).
- Configuring Compute to use FlatDHCP networking, with 192.168.100.0/24 as the fixed range for guest VMs on a bridge named br100.
- Creating and initializing the Nova database in MySQL.
- Adding images with Glance.
- Installing the OpenStack Dashboard.
Replace the default Ubuntu apt sources with the 163 mirror
- root@ubuntu-ops:~# cat /etc/apt/sources.list
- deb http://mirrors.163.com/ubuntu/ precise main restricted universe multiverse
- deb http://mirrors.163.com/ubuntu/ precise-security main restricted universe multiverse
- deb http://mirrors.163.com/ubuntu/ precise-updates main restricted universe multiverse
- deb http://mirrors.163.com/ubuntu/ precise-proposed main restricted universe multiverse
- deb http://mirrors.163.com/ubuntu/ precise-backports main restricted universe multiverse
- deb-src http://mirrors.163.com/ubuntu/ precise main restricted universe multiverse
- deb-src http://mirrors.163.com/ubuntu/ precise-security main restricted universe multiverse
- deb-src http://mirrors.163.com/ubuntu/ precise-updates main restricted universe multiverse
- deb-src http://mirrors.163.com/ubuntu/ precise-proposed main restricted universe multiverse
- deb-src http://mirrors.163.com/ubuntu/ precise-backports main restricted universe multiverse
- root@ubuntu-ops:~# apt-get update
Configure the NTP service
- root@ubuntu-ops:~# apt-get install ntp
- root@ubuntu-ops:~# sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
- root@ubuntu-ops:~# /etc/init.d/ntp restart
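The sed one-liner above splices a local-clock fallback (stratum 10) into /etc/ntp.conf so the controller keeps serving time when the upstream server is unreachable. As a sanity check, the same substitution can be exercised against a sample line before editing the real file (a dry-run sketch; GNU sed's `\n`-in-replacement behavior is assumed):

```shell
# Dry-run of the ntp.conf substitution on a sample line instead of the real file
printf 'server ntp.ubuntu.com\n' \
  | sed 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g'
# prints:
# server ntp.ubuntu.com
# server 127.127.1.0
# fudge 127.127.1.0 stratum 10
```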
On client nodes, sync time from the controller node
- ntpdate 'controllernode ip'
- hwclock -w
Add the OpenStack Ubuntu repository
- root@ubuntu-ops:~# echo -e "deb http://ppa.launchpad.net/openstack-ubuntu-testing/folsom-trunk-testing/ubuntu precise main \n deb-src http://ppa.launchpad.net/openstack-ubuntu-testing/folsom-trunk-testing/ubuntu precise main" > /etc/apt/sources.list.d/folsom.list
- root@ubuntu-ops:~# sudo apt-get install ubuntu-cloud-keyring ### install the repository signing key
Install the MySQL server and create the required databases (MySQL root password: ihaveu)
- root@ubuntu-ops:~# apt-get install mysql-server python-mysqldb
- sed -i 's/bind-address.*/bind-address = 0.0.0.0/g' /etc/mysql/my.cnf
- service mysql restart
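The bind-address change makes MySQL listen on all interfaces so other OpenStack nodes can reach it. The substitution can be dry-run against a sample line first (a sketch that does not touch the real /etc/mysql/my.cnf):

```shell
# Dry-run of the bind-address rewrite on a sample my.cnf line
printf 'bind-address = 127.0.0.1\n' | sed 's/bind-address.*/bind-address = 0.0.0.0/g'
# prints: bind-address = 0.0.0.0
```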
Install the Keystone service
- root@ubuntu-ops:~# apt-get install keystone python-keystone
Create the keystone database
- mysql -u root -p'ihaveu' -e "create database keystone;" ## using the root user here
Edit the Keystone configuration file (/etc/keystone/keystone.conf)
- admin_token = admin
- connection = mysql://root:ihaveu@localhost/keystone # database connection
- The default admin token is ADMIN; it is changed to admin here.
- By default Keystone uses a SQLite connection; it must be changed to MySQL.
Restart the Keystone service
- root@ubuntu-ops:/etc/keystone# service keystone restart
Create the Keystone database tables
- root@ubuntu-ops:/etc/keystone# keystone-manage db_sync
Add the Keystone environment variables
- root@ubuntu-ops:/etc/keystone# echo -e 'export SERVICE_ENDPOINT="http://192.168.10.23:35357/v2.0"\nexport SERVICE_TOKEN=admin' >> /etc/profile
- root@ubuntu-ops:/etc/keystone# source /etc/profile
Service management
The two main concepts in Identity service management are:
- services
- endpoints
The Identity service also maintains a user account for each service (for example, a user named nova for Compute); these accounts belong to a special service tenant.
Example: adding a user, a tenant, and a role:
- Create a user alice
- root@ubuntu-ops:/home/liming# keystone user-create --name=alice --pass=mypassword123 --email=alice@example.com
- Create a tenant acme
- root@ubuntu-ops:/home/liming# keystone tenant-create --name=acme
- Create a role compute-user
- root@ubuntu-ops:/home/liming# keystone role-create --name=compute-user
- Grant the user the role in the tenant (look up the ids first)
- root@ubuntu-ops:/home/liming# keystone user-list
- +----------------------------------+-------+---------+-------------------+
- | id | name | enabled | email |
- +----------------------------------+-------+---------+-------------------+
- | 32063772dbea4e92bf800d2bcf6664ff | alice | True | alice@example.com |
- +----------------------------------+-------+---------+-------------------+
- root@ubuntu-ops:/home/liming# keystone role-list
- +----------------------------------+--------------+
- | id | name |
- +----------------------------------+--------------+
- | 1eeab262539f40788fb49bfb49985f73 | compute-user |
- +----------------------------------+--------------+
- root@ubuntu-ops:/home/liming# keystone tenant-list
- +----------------------------------+------+---------+
- | id | name | enabled |
- +----------------------------------+------+---------+
- | aa30c6c4c8e7453689e5ab7a03248121 | acme | True |
- +----------------------------------+------+---------+
- root@ubuntu-ops:/home/liming# keystone user-role-add --user-id=32063772dbea4e92bf800d2bcf6664ff --role-id 1eeab262539f40788fb49bfb49985f73 --tenant-id=aa30c6c4c8e7453689e5ab7a03248121
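Copying ids out of the three tables above by hand is error-prone. A small helper can pull the id column out of the keystone client's table output by name (a sketch against captured sample output; `id_by_name` is a hypothetical helper of ours, and the Folsom-era client's table layout is assumed):

```shell
# Extract the id for a given name from keystone's +----+ table output.
# In practice you would pipe `keystone user-list` (etc.) into id_by_name.
sample_user_list='+----------------------------------+-------+---------+-------------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------------------+
| 32063772dbea4e92bf800d2bcf6664ff | alice | True | alice@example.com |
+----------------------------------+-------+---------+-------------------+'

id_by_name() {  # $1 = name to look up; table text on stdin
  awk -F'|' -v name="$1" '
    { gsub(/ /, "", $3) }                       # strip padding from the name column
    $3 == name { gsub(/ /, "", $2); print $2 }  # print the matching id column
  '
}

echo "$sample_user_list" | id_by_name alice
# prints: 32063772dbea4e92bf800d2bcf6664ff
```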
Alternatively, add the users, roles, and tenants with a script such as the following:
- #!/bin/bash
- #Auth:Liming
- #Date:2012.10.17
- # Tenant User Roles
- # ------------------------------------------------------------------
- # admin admin admin
- # service glance admin
- # service nova admin, [ResellerAdmin (swift only)]
- # service quantum admin # if enabled
- # service swift admin # if enabled
- # demo admin admin
- # demo demo Member, anotherrole
- # invisible_to_admin demo Member
- # Variables set before calling this script:
- # SERVICE_TOKEN - aka admin_token in keystone.conf
- # SERVICE_ENDPOINT - local Keystone admin endpoint
- # SERVICE_TENANT_NAME - name of tenant containing service accounts
- # ENABLED_SERVICES - stack.sh's list of services to start
- # DEVSTACK_DIR - Top-level DevStack directory
- KEYSTONE_CONF=${KEYSTONE_CONF:-/etc/keystone/keystone.conf}
- if [[ -r "$KEYSTONE_CONF" ]]; then
- EC2RC="$(dirname "$KEYSTONE_CONF")/ec2rc"
- else
- KEYSTONE_CONF=""
- EC2RC="ec2rc"
- fi
- #Password
- echo "Please input your password:"
- read -p "(Default password: ihaveu ):" DEFAULT_PASSWORD
- if [ "$DEFAULT_PASSWORD" = "" ]; then
- DEFAULT_PASSWORD=ihaveu
- fi
- echo "================================="
- echo password = "$DEFAULT_PASSWORD"
- echo "================================="
- ADMIN_PASSWORD=${ADMIN_PASSWORD:-$DEFAULT_PASSWORD}
- echo "Installing ......."
- function get_id () {
- echo `"$@" | grep ' id ' | awk '{print $4}'`
- }
- #Tenants
- ADMIN_TENANT=$(get_id keystone tenant-create --name admin)
- SERVICE_TENANT=$(get_id keystone tenant-create --name service)
- DEMO_TENANT=$(get_id keystone tenant-create --name demo)
- INVIS_TENANT=$(get_id keystone tenant-create --name invisible_to_admin)
- #Users
- DEMO_USER=$(get_id keystone user-create --name demo --pass $ADMIN_PASSWORD --email [email protected])
- ADMIN_USER=$(get_id keystone user-create --name admin --pass $ADMIN_PASSWORD --email [email protected])
- NOVA_USER=$(get_id keystone user-create --name nova --pass $ADMIN_PASSWORD --email [email protected])
- GLANCE_USER=$(get_id keystone user-create --name glance --pass $ADMIN_PASSWORD --email [email protected])
- SWIFT_USER=$(get_id keystone user-create --name swift --pass $ADMIN_PASSWORD --email [email protected])
- QUANTUM_USER=$(get_id keystone user-create --name quantum --pass $ADMIN_PASSWORD --email [email protected])
- #Roles
- ADMIN_ROLE=$(get_id keystone role-create --name admin)
- MEMBER_ROLE=$(get_id keystone role-create --name Member)
- KEYSTONEADMIN_ROLE=$(get_id keystone role-create --name KeystoneAdmin)
- KEYSTONESERVICE_ROLE=$(get_id keystone role-create --name KeystoneServiceAdmin)
- SYSADMIN_ROLE=$(get_id keystone role-create --name sysadmin)
- NETADMIN_ROLE=$(get_id keystone role-create --name netadmin)
- # Adding Roles to Users in Tenants
- #keystone user-role-add --user-id $USER_ID --role-id $ROLE_ID --tenant_id $TENANT_ID
- keystone user-role-add --user-id $ADMIN_USER --role-id $ADMIN_ROLE --tenant_id $ADMIN_TENANT
- keystone user-role-add --user-id $ADMIN_USER --role-id $MEMBER_ROLE --tenant_id $ADMIN_TENANT
- keystone user-role-add --user_id $ADMIN_USER --role_id $ADMIN_ROLE --tenant_id $DEMO_TENANT
- keystone user-role-add --user_id $DEMO_USER --role_id $MEMBER_ROLE --tenant_id $DEMO_TENANT
- keystone user-role-add --user_id $DEMO_USER --role_id $SYSADMIN_ROLE --tenant_id $DEMO_TENANT
- keystone user-role-add --user_id $DEMO_USER --role_id $NETADMIN_ROLE --tenant_id $DEMO_TENANT
- keystone user-role-add --user_id $DEMO_USER --role_id $MEMBER_ROLE --tenant_id $INVIS_TENANT
- keystone user-role-add --user-id $NOVA_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
- keystone user-role-add --user_id $GLANCE_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
- keystone user-role-add --user_id $SWIFT_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
- keystone user-role-add --user_id $QUANTUM_USER --role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
- # TODO(termie): these two might be dubious
- keystone user-role-add --user_id $ADMIN_USER --role_id $KEYSTONEADMIN_ROLE --tenant_id $ADMIN_TENANT
- keystone user-role-add --user_id $ADMIN_USER --role_id $KEYSTONESERVICE_ROLE --tenant_id $ADMIN_TENANT
- # Creating Services
- #keystone service-create --name service_name --type service_type --description 'Description of the service'
- NOVA_SERVICE=$(get_id keystone service-create --name nova --type compute --description 'OpenStack Compute Service')
- VOLUME_SERVICE=$(get_id keystone service-create --name volume --type volume --description 'OpenStack Volume Service')
- GLANCE_SERVICE=$(get_id keystone service-create --name glance --type image --description 'OpenStack Image Service')
- SWIFT_SERVICE=$(get_id keystone service-create --name swift --type object-store --description 'OpenStack Storage Service')
- KEYSTONE_SERVICE=$(get_id keystone service-create --name keystone --type identity --description 'OpenStack Identity Service')
- EC2_SERVICE=$(get_id keystone service-create --name ec2 --type ec2 --description 'EC2 Compatibility Layer')
- QUANTUM_SERVICE=$(get_id keystone service-create --name quantum --type network --description "Quantum Service")
- keystone service-create --name "horizon" --type dashboard --description "OpenStack Dashboard"
- # Creating Endpoints
- #keystone endpoint-create --region region_name --service_id service_id --publicurl public_url --adminurl admin_url --internalurl internal_url
- keystone endpoint-create --region myregion --service_id $NOVA_SERVICE --publicurl 'http://192.168.10.23:8774/v2/$(tenant_id)s' --adminurl 'http://192.168.10.23:8774/v2/$(tenant_id)s' --internalurl 'http://192.168.10.23:8774/v2/$(tenant_id)s'
- keystone endpoint-create --region myregion --service_id $VOLUME_SERVICE --publicurl 'http://192.168.10.23:8776/v1/$(tenant_id)s' --adminurl 'http://192.168.10.23:8776/v1/$(tenant_id)s' --internalurl 'http://192.168.10.23:8776/v1/$(tenant_id)s'
- keystone endpoint-create --region myregion --service_id $GLANCE_SERVICE --publicurl 'http://192.168.10.23:9292/v1' --adminurl 'http://192.168.10.23:9292/v1' --internalurl 'http://192.168.10.23:9292/v1'
- keystone endpoint-create --region myregion --service_id $SWIFT_SERVICE --publicurl 'http://192.168.10.23:8080/v1/AUTH_$(tenant_id)s' --adminurl 'http://192.168.10.23:8080/v1' --internalurl 'http://192.168.10.23:8080/v1/AUTH_$(tenant_id)s'
- keystone endpoint-create --region myregion --service_id $KEYSTONE_SERVICE --publicurl 'http://192.168.10.23:5000/v2.0' --adminurl 'http://192.168.10.23:35357/v2.0' --internalurl 'http://192.168.10.23:5000/v2.0'
- keystone endpoint-create --region myregion --service_id $EC2_SERVICE --publicurl 'http://192.168.10.23:8773/services/Cloud' --adminurl 'http://192.168.10.23:8773/services/Admin' --internalurl 'http://192.168.10.23:8773/services/Cloud'
- keystone endpoint-create --region myregion --service_id $QUANTUM_SERVICE --publicurl 'http://192.168.10.23:9696' --adminurl 'http://192.168.10.23:9696' --internalurl 'http://192.168.10.23:9696'
- # create ec2 creds and parse the secret and access key returned
- RESULT=$(keystone ec2-credentials-create --tenant_id=$ADMIN_TENANT --user_id=$ADMIN_USER)
- ADMIN_ACCESS=`echo "$RESULT" | grep access | awk '{print $4}'`
- ADMIN_SECRET=`echo "$RESULT" | grep secret | awk '{print $4}'`
- RESULT=$(keystone ec2-credentials-create --tenant_id=$DEMO_TENANT --user_id=$DEMO_USER)
- DEMO_ACCESS=`echo "$RESULT" | grep access | awk '{print $4}'`
- DEMO_SECRET=`echo "$RESULT" | grep secret | awk '{print $4}'`
- # write the secret and access to ec2rc
- cat > $EC2RC <<EOF
- ADMIN_ACCESS=$ADMIN_ACCESS
- ADMIN_SECRET=$ADMIN_SECRET
- DEMO_ACCESS=$DEMO_ACCESS
- DEMO_SECRET=$DEMO_SECRET
- EOF
- echo ""
- echo "user tenants role has been created !"
- export OS_USERNAME=admin
- export OS_PASSWORD=ihaveu
- export OS_TENANT_NAME=admin
- export OS_AUTH_URL="http://192.168.10.23:35357/v2.0"
- source /etc/profile
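The script's get_id helper works by scraping the ` id ` row out of the table each keystone create command prints and taking the fourth whitespace-separated field. A minimal reproduction against a canned table (`fake_keystone_create` is our stand-in, so this runs without a live Keystone):

```shell
# get_id as defined in the script above
get_id () {
  echo `"$@" | grep ' id ' | awk '{print $4}'`
}

# Stand-in for `keystone tenant-create --name admin`: prints a keystone-style table
fake_keystone_create() {
  printf '+-------------+----------------------------------+\n'
  printf '|   Property  |              Value               |\n'
  printf '+-------------+----------------------------------+\n'
  printf '| enabled     | True                             |\n'
  printf '| id          | aa30c6c4c8e7453689e5ab7a03248121 |\n'
  printf '| name        | admin                            |\n'
  printf '+-------------+----------------------------------+\n'
}

ADMIN_TENANT=$(get_id fake_keystone_create)
echo "$ADMIN_TENANT"
# prints: aa30c6c4c8e7453689e5ab7a03248121
```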
Install OpenStack Compute and the Image service
Install the Glance packages
- root@ubuntu-ops:~# sudo apt-get install glance glance-api glance-common python-glanceclient glance-registry python-glance
Remove the Glance SQLite database and point Glance at the MySQL database
- root@ubuntu-ops:~# rm -rf /var/lib/glance/glance.sqlite
Create the glance database
- mysql -u root -p'ihaveu' -e "create database glance;"
Edit the Glance configuration
In /etc/glance/glance-api-paste.ini, configure [filter:authtoken]:
- admin_tenant_name = service
- admin_user = glance
- admin_password = ihaveu
Edit /etc/glance/glance-api.conf as follows
- [keystone_authtoken]
- auth_host = 192.168.10.23
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = ihaveu
- [paste_deploy]
- config_file = /etc/glance/glance-api-paste.ini
- flavor=keystone
Edit the database connection in /etc/glance/glance-registry.conf as follows
- sql_connection = mysql://root:ihaveu@localhost/glance
- [keystone_authtoken]
- auth_host = 192.168.10.23
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = glance
- admin_password = ihaveu
- [paste_deploy]
- config_file = /etc/glance/glance-registry-paste.ini
- flavor=keystone
Restart glance-registry so the configuration takes effect
- root@ubuntu-ops:/etc/glance# service glance-registry restart
Sync the Glance database
- root@ubuntu-ops:/etc/glance# glance-manage version_control 0
- root@ubuntu-ops:/etc/glance# glance-manage db_sync
- /usr/lib/python2.7/dist-packages/glance/db/sqlalchemy/migrate_repo/versions/003_add_disk_format.py:46: SADeprecationWarning: useexisting is deprecated. Use extend_existing.useexisting=True)
Restart the glance-registry and glance-api services
- root@ubuntu-ops:/etc/glance# service glance-registry restart && service glance-api restart
Verify Glance, part one:
root@ubuntu-ops:/etc/glance# glance index
ID Name Disk Format Container Format Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
If you see this (empty) listing, Glance is installed. Continue...
If there are problems, check the logs:
/var/log/glance/api.log or /var/log/glance/registry.log
Verify Glance, part two:
root@ubuntu-ops:/etc/glance# mkdir /tmp/image
root@ubuntu-ops:/etc/glance# cd /tmp/image
root@ubuntu-ops:/tmp/image# wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz^C
root@ubuntu-ops:/tmp/image# tar -zxvf ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz ^C
root@ubuntu-ops:/tmp/image# cp /home/liming/ubuntu-12.04-server-cloudimg-amd64-disk1.img ./
root@ubuntu-ops:/tmp/image# glance add name="Ubuntu-12.04" disk_format=qcow2 container_format=ovf is_public=true < ubuntu-12.04-server-cloudimg-amd64-disk1.img
Added new image with ID: 708a1732-a191-4595-8488-9ba8d5149ce0 ## "Added new image" means the upload succeeded
root@ubuntu-ops:/tmp/image# glance index
ID Name Disk Format Container Format Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
708a1732-a191-4595-8488-9ba8d5149ce0 Ubuntu-12.04 qcow2 ovf 230490112
root@ubuntu-ops:/tmp/image# du -sh
220M .
If you see this, Glance is working correctly.
Configure the hypervisor (KVM)
root@ubuntu-ops:/tmp/image# sudo apt-get install -y bridge-utils kvm
Test KVM
- root@ubuntu-ops:/etc/nova# kvm-ok
- INFO: /dev/kvm exists
- KVM acceleration can be used
If KVM is not available, the output is:
- INFO: Your CPU does not support KVM extensions
- KVM acceleration can NOT be used
If KVM is not supported, check whether VT is enabled in the BIOS and whether the CPU supports vmx (Intel) or, for AMD CPUs, svm; check with:
- root@ubuntu-ops:/etc/nova# egrep '(vmx|svm)' --color=always /proc/cpuinfo
If the output is empty, neither extension is supported; that is the case on this machine, so QEMU or Xen must be used instead.
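The egrep check above prints the matching cpuinfo lines. It can be wrapped into a small verdict function, shown here against sample flags lines rather than the real /proc/cpuinfo (since on this machine neither flag is present):

```shell
# Decide KVM suitability from a cpuinfo-style flags line (vmx = Intel VT-x, svm = AMD-V)
has_virt() {  # cpuinfo text on stdin
  if grep -q -E '(vmx|svm)' -; then
    echo "KVM capable"
  else
    echo "qemu/xen only"
  fi
}

printf 'flags : fpu vme de pse tsc msr\n' | has_virt   # prints: qemu/xen only
printf 'flags : fpu vme vmx est tm2\n'    | has_virt   # prints: KVM capable
```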
Configure the network interfaces
- root@ubuntu-ops:/tmp/image# cat /etc/network/interfaces
- auto lo
- iface lo inet loopback
- # The primary network interface
- auto eth1
- iface eth1 inet static
- address 192.168.10.23
- netmask 255.255.255.0
- broadcast 192.168.10.255
- gateway 192.168.10.1
- dns-nameserver 8.8.8.8
- auto eth0
- iface eth0 inet static
- address 10.10.10.2
- netmask 255.255.255.0
- network 10.10.10.0
- broadcast 10.10.10.255
- auto br100
- iface br100 inet static
- address 192.168.100.1
- netmask 255.255.255.0
- bridge_stp off
- bridge_fd 0
Restart networking
- root@ubuntu-ops:/tmp/image# /etc/init.d/networking restart
Create the bridge; in nova.conf, flat_network_bridge=br100 points at it
root@ubuntu-ops:/etc/nova# brctl addbr br100
root@ubuntu-ops:/etc/nova# brctl show # list bridges
Create the nova database
- root@ubuntu-ops:~# mysql -uroot -pihaveu -e "create database nova;"
Install the message queue service
- root@ubuntu-ops:~# sudo apt-get install -y rabbitmq-server
Install the Nova packages
- root@ubuntu-ops:~# sudo apt-get install -y nova-compute nova-volume nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler nova-network
Edit nova.conf as follows
root@ubuntu-ops:/etc/nova# cat nova.conf
- [DEFAULT]
- # LOGS/STATE
- verbose=True
- logdir=/var/log/nova
- state_path=/var/lib/nova
- lock_path=/var/lock/nova
- # AUTHENTICATION
- auth_strategy=keystone
- # SCHEDULER
- compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
- # VOLUMES
- volume_driver=nova.volume.driver.ISCSIDriver
- volume_group=nova-volumes
- volume_name_template=volume-%08x
- iscsi_helper=tgtadm
- iscsi_ip_prefix=192.168.10.23
- iscsi_ip_address=192.168.10.23
- iscsi_port=3260
- # DATABASE
- sql_connection=mysql://root:ihaveu@localhost/nova
- # COMPUTE
- connection_type=libvirt
- libvirt_type=qemu
- compute_driver=libvirt.LibvirtDriver
- instance_name_template=instance-%08x
- api_paste_config=/etc/nova/api-paste.ini
- allow_resize_to_same_host=True
- # APIS
- osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
- ec2_dmz_host=192.168.10.23
- s3_host=192.168.10.23
- # RABBITMQ
- rabbit_host=192.168.10.23
- # GLANCE
- image_service=nova.image.glance.GlanceImageService
- glance_api_servers=192.168.10.23:9292
- # NETWORK
- network_manager=nova.network.manager.FlatDHCPManager
- force_dhcp_release=True
- dhcpbridge_flagfile=/etc/nova/nova.conf
- firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
- # Change my_ip to match each host
- my_ip=192.168.10.23
- public_interface=br100
- vlan_interface=eth1
- flat_network_bridge=br100
- flat_interface=eth1
- fixed_range=192.168.100.0/24
- # NOVNC CONSOLE
- novncproxy_base_url=http://192.168.10.23:6080/vnc_auto.html
- # Change vncserver_proxyclient_address and vncserver_listen to match each compute host
- vncserver_proxyclient_address=192.168.10.23
- vncserver_listen=192.168.10.23
- [keystone_authtoken]
- auth_host = 192.168.10.23
- auth_port = 35357
- auth_protocol = http
- auth_uri = http://192.168.10.23:5000/
- admin_tenant_name = service
- admin_user = nova
- admin_password = ihaveu
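With a flat file this long it helps to be able to read a single option back out. A small awk sketch that fetches a key from a given section (assuming the plain key=value layout used in [DEFAULT] above, with no spaces around the equals sign):

```shell
# Read one option from a nova.conf-style ini (key=value lines, [section] headers)
ini_get() {  # $1 = section name, $2 = key; file text on stdin
  awk -F'=' -v section="[$1]" -v key="$2" '
    /^\[/               { in_sect = ($0 == section) }            # track current [section]
    in_sect && $1 == key { print substr($0, index($0, "=") + 1) } # value after first "="
  '
}

sample_conf='[DEFAULT]
auth_strategy=keystone
fixed_range=192.168.100.0/24
[keystone_authtoken]
auth_port = 35357'

echo "$sample_conf" | ini_get DEFAULT fixed_range
# prints: 192.168.100.0/24
```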
Set libvirt_type to qemu in nova-compute.conf (since KVM is unavailable here)
- [DEFAULT]
- libvirt_type=qemu
Edit api-paste.ini as follows:
- [filter:authtoken]
- paste.filter_factory = keystone.middleware.auth_token:filter_factory
- auth_host = 192.168.10.23
- auth_port = 35357
- auth_protocol = http
- admin_tenant_name = service
- admin_user = nova
- admin_password = ihaveu
- signing_dirname = /tmp/keystone-signing-nova
Create the Nova database tables
- root@ubuntu-ops:/etc/nova# sudo nova-manage db sync
- 2012-10-22 11:21:07 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.migration' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/migration.pyc'> from (pid=29622) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:494
The DEBUG output above can be ignored.
If the sync fails, check /var/log/nova/nova-manage.log for errors; the DEBUG line above indicates the tables were created successfully.
Create the nova-volumes volume group
- root@ubuntu-ops:~# sudo pvcreate /dev/sdb
- Physical volume "/dev/sdb" successfully created
- root@ubuntu-ops:~# sudo vgcreate nova-volumes /dev/sdb
- Volume group "nova-volumes" successfully created
Start the Nova services:
- cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done
Create the VM fixed-IP network
- nova-manage network create private --fixed_range_v4=192.168.100.0/24 --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=32
or
- nova-manage network create private --multi_host=T --fixed_range_v4=192.168.100.0/24 --bridge_interface=br100 --num_networks=1 --network_size=256
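A note on --network_size: a /24 fixed range holds 2^(32-24) = 256 addresses, so network_size=256 in the second command covers the whole range, while 32 in the first allocates only the first 32 addresses. The arithmetic is easy to check in the shell:

```shell
# Number of addresses in a CIDR block: 2^(32 - prefix_length)
cidr_size() { echo $((1 << (32 - $1))); }

cidr_size 24   # prints: 256
cidr_size 27   # prints: 32
```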
Verify the Nova services
- root@ubuntu-ops:~# nova-manage service list
- Binary Host Zone Status State Updated_At
- nova-consoleauth ubuntu-ops nova enabled :-) 2012-10-22 10:37:01
- nova-cert ubuntu-ops nova enabled :-) 2012-10-22 10:37:01
- nova-scheduler ubuntu-ops nova enabled :-) 2012-10-22 10:37:03
- nova-compute ubuntu-ops nova enabled :-) 2012-10-22 10:37:05
- nova-network ubuntu-ops nova enabled :-) 2012-10-22 10:36:58
- nova-volume ubuntu-ops nova enabled :-) 2012-10-22 10:37:02
If every service returns a smiley face, the installation is fine; XXX means a service did not start successfully. If the services run on different machines and a service that is running on another host does not show up here, check the NTP configuration.
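On a deployment with more hosts, eyeballing the smileys gets tedious; enabled services whose state is not :-) can be filtered out of the `nova-manage service list` output with awk (a sketch against sample output, with one artificially failed service added):

```shell
# Print enabled services whose state column is not the ":-)" smiley
dead_services() {  # `nova-manage service list` output on stdin
  awk '$4 == "enabled" && $5 != ":-)" { print $1 " on " $2 }'
}

sample='Binary Host Zone Status State Updated_At
nova-compute ubuntu-ops nova enabled :-) 2012-10-22 10:37:05
nova-network ubuntu-ops nova enabled XXX 2012-10-22 09:00:00'

echo "$sample" | dead_services
# prints: nova-network on ubuntu-ops
```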
The installed version can be confirmed as Folsom with nova-manage:
- root@ubuntu-ops:~# nova-manage version list
- 2012.2 (2012.2-LOCALBRANCH:LOCALREVISION)
- root@ubuntu-ops:~# nova image-list # keyring password: ihaveu
- Please set a password for your new keyring
- Password (again):
- Error: Your passwords didn't match
- Please set a password for your new keyring
- Password (again):
- Please input your password for the keyring
- +--------------------------------------+--------------+--------+--------+
- | ID | Name | Status | Server |
- +--------------------------------------+--------------+--------+--------+
- | 708a1732-a191-4595-8488-9ba8d5149ce0 | Ubuntu-12.04 | ACTIVE | |
- +--------------------------------------+--------------+--------+--------+
Register a virtual machine:
List the Nova security groups
- root@ubuntu-ops:~# nova secgroup-list
- Please input your password for the keyring # enter the password set during nova image-list
- +---------+-------------+
- | Name | Description |
- +---------+-------------+
- | default | default |
- +---------+-------------+
Add a rule allowing TCP port 22 (SSH) from any address
- root@ubuntu-ops:~# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
- Please input your password for the keyring
- +-------------+-----------+---------+-----------+--------------+
- | IP Protocol | From Port | To Port | IP Range | Source Group |
- +-------------+-----------+---------+-----------+--------------+
- | tcp | 22 | 22 | 0.0.0.0/0 | |
- +-------------+-----------+---------+-----------+--------------+
Allow ICMP (ping)
- root@ubuntu-ops:~# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
- Please input your password for the keyring
- +-------------+-----------+---------+-----------+--------------+
- | IP Protocol | From Port | To Port | IP Range | Source Group |
- +-------------+-----------+---------+-----------+--------------+
- | icmp | -1 | -1 | 0.0.0.0/0 | |
- +-------------+-----------+---------+-----------+--------------+
Create a keypair
- root@ubuntu-ops:~# nova keypair-add --pub_key ~/.ssh/id_rsa.pub mykey
- Please input your password for the keyring
- root@ubuntu-ops:~# nova keypair-list # list keypairs
- Please input your password for the keyring
- +-------+-------------------------------------------------+
- | Name | Fingerprint |
- +-------+-------------------------------------------------+
- | mykey | 4d:c9:a5:ba:43:6f:dc:c0:a6:d0:21:e6:82:ac:8b:4c |
- +-------+-------------------------------------------------+
Launch an instance
List the available flavors
- root@ubuntu-ops:~# nova flavor-list
- +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
- | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
- +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
- | 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 | True | {} |
- | 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True | {} |
- | 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True | {} |
- | 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True | {} |
- | 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True | {} |
- +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
List the available images
- root@ubuntu-ops:~# nova image-list
- Please input your password for the keyring
- +--------------------------------------+--------------+--------+--------+
- | ID | Name | Status | Server |
- +--------------------------------------+--------------+--------+--------+
- | 708a1732-a191-4595-8488-9ba8d5149ce0 | Ubuntu-12.04 | ACTIVE | |
- +--------------------------------------+--------------+--------+--------+
Boot the first VM with nova
- root@ubuntu-ops:~# nova boot --flavor 1 --image 708a1732-a191-4595-8488-9ba8d5149ce0 --key_name mykey --security_group default ihaveu
- Please input your password for the keyring
- +-------------------------------------+----------------------------------------------------------+
- | Property | Value |
- +-------------------------------------+----------------------------------------------------------+
- | OS-DCF:diskConfig | MANUAL |
- | OS-EXT-SRV-ATTR:host | ubuntu-ops |
- | OS-EXT-SRV-ATTR:hypervisor_hostname | ubuntu-ops |
- | OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
- | OS-EXT-STS:power_state | 0 |
- | OS-EXT-STS:task_state | scheduling |
- | OS-EXT-STS:vm_state | building |
- | accessIPv4 | |
- | accessIPv6 | |
- | adminPass | 8FxmnesjB7Fn |
- | config_drive | |
- | created | 2012-10-22T11:27:21Z |
- | flavor | m1.tiny |
- | hostId | ff32bc3f07c7bbd0b06c65436aaa13150d3cb2be04acf05e628d80fa |
- | id | 618cbea9-79cf-438a-aded-dcb6ecf633b0 |
- | image | Ubuntu-12.04 |
- | key_name | mykey |
- | metadata | {} |
- | name | ihaveu |
- | progress | 0 |
- | security_groups | [{u'name': u'default'}] |
- | status | BUILD |
- | tenant_id | b739aa09ec3f4691afb34462d8f1da8d |
- | updated | 2012-10-22T11:27:22Z |
- | user_id | 2a8dea4d0b694079ba06b6a123e38e5b |
- +-------------------------------------+----------------------------------------------------------+
Install the OpenStack Dashboard
The following steps set up the OpenStack Dashboard.
Install the Dashboard framework, including Apache and related modules.
- root@ubuntu-ops:~# apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard
Edit the Dashboard configuration file
Below the TEMPLATE_DEBUG = DEBUG line, add:
QUANTUM_ENABLED = False
Switch the Dashboard database to MySQL
Create the Dashboard database
- root@ubuntu-ops:~# mysql -uroot -pihaveu -e "create database horizon;"
Point the Dashboard DATABASES setting at MySQL
- DATABASES = {
- 'default': {
- 'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
- 'NAME': 'horizon', # Or path to database file if using sqlite3.
- 'USER': 'root', # Not used with sqlite3.
- 'PASSWORD': 'ihaveu', # Not used with sqlite3.
- 'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
- 'PORT': '', # Set to empty string for default. Not used with sqlite3.
- }
- }
Restart the Apache and memcached services
- root@ubuntu-ops:~# sudo service apache2 restart; sudo service memcached restart
Access the OpenStack Dashboard
http://192.168.10.23/horizon
The username is admin; the password is the one set when the admin user was created.
Whether due to the VM or the release, the pages look somewhat different from the Essex version.
That completes the installation inside a virtual machine; next up is installing on a physical machine, creating some instances, and switching qemu back to kvm.
Worked out step by step along the way.
Error 1:
2012-10-22 16:25:14 CRITICAL nova [-] Unexpected error while running command.
Command: sudo iptables-save -c -t filter
Exit code: 1
Stdout: ''
Stderr: 'sudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: 3 incorrect password attempts\n'
2012-10-22 16:25:14 TRACE nova Traceback (most recent call last):
2012-10-22 16:25:14 TRACE nova File "/usr/bin/nova-network", line 48, in
2012-10-22 16:25:14 TRACE nova service.wait()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 659, in wait
2012-10-22 16:25:14 TRACE nova _launcher.wait()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 192, in wait
2012-10-22 16:25:14 TRACE nova super(ServiceLauncher, self).wait()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 162, in wait
2012-10-22 16:25:14 TRACE nova service.wait()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
2012-10-22 16:25:14 TRACE nova return self._exit_event.wait()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2012-10-22 16:25:14 TRACE nova return hubs.get_hub().switch()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
2012-10-22 16:25:14 TRACE nova return self.greenlet.switch()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
2012-10-22 16:25:14 TRACE nova result = function(*args, **kwargs)
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 132, in run_server
2012-10-22 16:25:14 TRACE nova server.start()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/service.py", line 398, in start
2012-10-22 16:25:14 TRACE nova self.manager.init_host()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1899, in init_host
2012-10-22 16:25:14 TRACE nova self.l3driver.initialize()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/l3.py", line 82, in initialize
2012-10-22 16:25:14 TRACE nova linux_net.init_host()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 552, in init_host
2012-10-22 16:25:14 TRACE nova add_snat_rule(ip_range)
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 542, in add_snat_rule
2012-10-22 16:25:14 TRACE nova iptables_manager.apply()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 345, in apply
2012-10-22 16:25:14 TRACE nova self._apply()
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 744, in inner
2012-10-22 16:25:14 TRACE nova retval = f(*args, **kwargs)
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 365, in _apply
2012-10-22 16:25:14 TRACE nova attempts=5)
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 948, in _execute
2012-10-22 16:25:14 TRACE nova return utils.execute(*cmd, **kwargs)
2012-10-22 16:25:14 TRACE nova File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 198, in execute
2012-10-22 16:25:14 TRACE nova cmd=' '.join(cmd))
2012-10-22 16:25:14 TRACE nova ProcessExecutionError: Unexpected error while running command.
2012-10-22 16:25:14 TRACE nova Command: sudo iptables-save -c -t filter
2012-10-22 16:25:14 TRACE nova Exit code: 1
2012-10-22 16:25:14 TRACE nova Stdout: ''
2012-10-22 16:25:14 TRACE nova Stderr: 'sudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: 3 incorrect password attempts\n'
2012-10-22 16:25:14 TRACE nova
Solution:
The nova user cannot escalate through sudo, so add the following to /etc/sudoers:
nova ALL=(ALL) NOPASSWD:ALL
Reference: http://docs.openstack.org/trunk/openstack-compute/install/apt/content/