OpenStack Grizzly three-node deployment notes on Ubuntu 13.04


1. Preparation

Special note:

This document draws on the official documentation (http://docs.openstack.org/), the GitHub guide (https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst) and longgeek's notes (http://longgeek.com/2013/03/31/openstack-grizzly-multinode-deployment-in-ubuntu-12-04/), and I also asked quite a few experts in the OpenStack groups; thanks to all of them.


Required equipment:

One physical machine with 8 GB RAM running Windows 2003 SP2, with VMware Workstation 9 and an Ubuntu 13.04 (64-bit) image.


Network layout:

Control node: eth0 (10.10.10.51), eth1 (172.16.10.200)

Network node: eth0 (10.10.10.52), eth1 (10.20.20.52), eth2 (172.16.10.201)

Compute node: eth0 (10.10.10.55), eth1 (10.20.20.55)

External network: 172.16.10.0/24 (internet access and access to OpenStack from outside)

Management network: 10.10.10.0/24 (traffic between the three nodes, e.g. Keystone authentication and the RabbitMQ message queue)

Data network: 10.20.20.0/24 (VM traffic between the network node and the compute node, e.g. DHCP, L2, L3)

Topology:

Note: since everything runs in virtual machines with 2 GB RAM each, the external network uses a bridged segment and the management and data networks use vmnet2 and vmnet3. Because the compute node has no external address and cannot download packages, you can add a temporary NAT network and remove it after installation; alternatively, as the official docs do, set the compute node's gateway to the network node's IP and let the network node NAT its traffic. Either approach does not affect the result.


Installation steps:


2. Control node


2.1 Prepare Ubuntu

Add the Grizzly repository:

apt-get install -y ubuntu-cloud-keyring

echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list

Update the system:

apt-get update -y

apt-get upgrade -y

apt-get dist-upgrade -y


2.2 Network configuration

#cat /etc/network/interfaces

auto eth0

iface eth0 inet static

address 10.10.10.51

netmask 255.255.255.0


auto eth1

iface eth1 inet static

address 172.16.10.200

netmask 255.255.255.0

gateway 172.16.10.254

dns-nameservers 172.16.10.5


Restart the networking service:

service networking restart


2.3 Install MySQL

Install MySQL:

apt-get install -y mysql-server python-mysqldb


Configure MySQL to accept connections from all hosts:

sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf

service mysql restart


Create the databases:

mysql -u root -p

#Keystone

CREATE DATABASE keystone;

GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';

#Glance

CREATE DATABASE glance;

GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

#Quantum

CREATE DATABASE quantum;

GRANT ALL ON quantum.* TO 'quantum'@'%' IDENTIFIED BY 'quantum';

#Nova

CREATE DATABASE nova;

GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';

#Cinder

CREATE DATABASE cinder;

GRANT ALL ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';

quit;


2.4 RabbitMQ

Install RabbitMQ:

apt-get install -y rabbitmq-server


Install NTP service:

apt-get install -y ntp


2.5. Others


Install other services:

apt-get install -y vlan bridge-utils


Enable IP_Forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

# To save you from rebooting, perform the following

sysctl net.ipv4.ip_forward=1


# sysctl -p to apply the change immediately


2.6 Keystone

Install Keystone:

#apt-get install -y keystone

Edit the database connection in /etc/keystone/keystone.conf:

connection = mysql://keystoneUser:keystonePass@10.10.10.51/keystone

Restart the Keystone service and sync the database:

service keystone restart

keystone-manage db_sync


Populate Keystone with the scripts below, which can be downloaded from the URLs that follow. Adjust the IP addresses and passwords for your environment; the scripts create the tenants, users, services, endpoints and listening ports:

wget https://raw.github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/OVS_MultiNode/KeystoneScripts/keystone_basic.sh


wget https://raw.github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/OVS_MultiNode/KeystoneScripts/keystone_endpoints_basic.sh


The script contents are as follows:

root@control:~# cat keystone_endpoints_basic.sh

#!/bin/sh

#

# Keystone basic Endpoints


# Mainly inspired by https://github.com/openstack/keystone/blob/master/tools/sample_data.sh


# Modified by Bilel Msekni / Institut Telecom

#

# Support: [email protected]

# License: Apache Software License (ASL) 2.0

#


# Host address

HOST_IP=10.10.10.51

EXT_HOST_IP=172.16.10.200


# MySQL definitions

MYSQL_USER=keystone

MYSQL_DATABASE=keystone

MYSQL_HOST=$HOST_IP

MYSQL_PASSWORD=keystone


# Keystone definitions

KEYSTONE_REGION=RegionOne

export SERVICE_TOKEN=ADMIN

export SERVICE_ENDPOINT="http://${HOST_IP}:35357/v2.0"


while getopts "u:D:p:m:K:R:E:T:vh" opt; do

case $opt in

u)

MYSQL_USER=$OPTARG

;;

D)

MYSQL_DATABASE=$OPTARG

;;

p)

MYSQL_PASSWORD=$OPTARG

;;

m)

MYSQL_HOST=$OPTARG

;;

K)

MASTER=$OPTARG

;;

R)

KEYSTONE_REGION=$OPTARG

;;

E)

export SERVICE_ENDPOINT=$OPTARG

;;

T)

export SERVICE_TOKEN=$OPTARG

;;

v)

set -x

;;

h)

cat << EOF

Usage: $0 [-m mysql_hostname] [-u mysql_username] [-D mysql_database] [-p mysql_password]

[-K keystone_master ] [ -R keystone_region ] [ -E keystone_endpoint_url ]

[ -T keystone_token ]


Add -v for verbose mode, -h to display this message.

EOF

exit 0

;;

\?)

echo "Unknown option -$OPTARG" >&2

exit 1

;;

:)

echo "Option -$OPTARG requires an argument" >&2

exit 1

;;

esac

done


if [ -z "$KEYSTONE_REGION" ];then

echo "Keystone region not set. Please set with -R option or setKEYSTONE_REGION variable." >&2

missing_args="true"

fi


if [ -z "$SERVICE_TOKEN" ]; then

echo "Keystone service token not set. Please set with -T option orset SERVICE_TOKEN variable." >&2

missing_args="true"

fi


if [ -z "$SERVICE_ENDPOINT" ];then

echo "Keystone service endpoint not set. Please set with -E optionor set SERVICE_ENDPOINT variable." >&2

missing_args="true"

fi


if [ -z "$MYSQL_PASSWORD" ]; then

echo "MySQL password not set. Please set with -p option or setMYSQL_PASSWORD variable." >&2

missing_args="true"

fi


if [ -n "$missing_args" ]; then

exit 1

fi


keystone service-create --name nova --type compute --description 'OpenStack Compute Service'

keystone service-create --name cinder --type volume --description 'OpenStack Volume Service'

keystone service-create --name glance --type image --description 'OpenStack Image Service'

keystone service-create --name keystone --type identity --description 'OpenStack Identity'

keystone service-create --name ec2 --type ec2 --description 'OpenStack EC2 service'

keystone service-create --name quantum --type network --description 'OpenStack Networking service'


create_endpoint () {

case $1 in

compute)

keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8774/v2/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8774/v2/$(tenant_id)s'

;;

volume)

keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8776/v1/$(tenant_id)s' --adminurl 'http://'"$HOST_IP"':8776/v1/$(tenant_id)s' --internalurl 'http://'"$HOST_IP"':8776/v1/$(tenant_id)s'

;;

image)

keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9292/' --adminurl 'http://'"$HOST_IP"':9292/' --internalurl 'http://'"$HOST_IP"':9292/'

;;

identity)

keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':5000/v2.0' --adminurl 'http://'"$HOST_IP"':35357/v2.0' --internalurl 'http://'"$HOST_IP"':5000/v2.0'

;;

ec2)

keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':8773/services/Cloud' --adminurl 'http://'"$HOST_IP"':8773/services/Admin' --internalurl 'http://'"$HOST_IP"':8773/services/Cloud'

;;

network)

keystone endpoint-create --region $KEYSTONE_REGION --service-id $2 --publicurl 'http://'"$EXT_HOST_IP"':9696/' --adminurl 'http://'"$HOST_IP"':9696/' --internalurl 'http://'"$HOST_IP"':9696/'

;;

esac

}


for i in compute volume image object-store identity ec2 network; do

id=`mysql -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" -ss -e "SELECT id FROM service WHERE type='"$i"';"` || exit 1

create_endpoint $i $id

done
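
After editing HOST_IP, EXT_HOST_IP and the passwords in both scripts to match this setup (10.10.10.51 / 172.16.10.200, password 123456), run them on the control node. A minimal sketch of the invocation, assuming the scripts sit in root's home directory (the exact command is not shown in the original notes):

chmod +x keystone_basic.sh keystone_endpoints_basic.sh

./keystone_basic.sh

./keystone_endpoints_basic.sh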


Set the environment variables; otherwise every keystone command-line query needs a long list of arguments:

root@control:~# cat creds

#Paste the following:

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=123456

export OS_AUTH_URL="http://172.16.10.200:5000/v2.0/"

root@control:~# source creds
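
As a quick sanity check (an extra step, not in the original notes), keystone token-get should now return a token without any additional arguments; if it fails, re-check the creds file and the endpoint list:

root@control:~# keystone token-get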


Check the Keystone setup:

root@control:~# keystone user-list

+----------------------------------+---------+---------+--------------------+

| id | name | enabled | email |

+----------------------------------+---------+---------+--------------------+

| 546b18d85b9a4bf8b548bd08e8ecfe87 | admin | True | [email protected] |

| a0dbcb1c75814ab285ea0ddc4a156dd6 | cinder | True | [email protected] |

| 1a860d4cd8244bb3bc19e9cfe8259e60 | demo | True | [email protected] |

| 08725b7243854901bb0835be1e3a8c5e | glance | True | [email protected] |

| 1dcb939697e04229ae14abe02fce6d6f | nova | True | [email protected] |

| 5e447437acc148d88e386989d62da44d |quantum | True | [email protected] |

+----------------------------------+---------+---------+--------------------+

root@control:~# keystone endpoint-list

+----------------------------------+-----------+--------------------------------------------+------------------------------------------+------------------------------------------+----------------------------------+

| id | region | publicurl | internalurl | adminurl | service_id |

+----------------------------------+-----------+--------------------------------------------+------------------------------------------+------------------------------------------+----------------------------------+

| 12eac4b2ed91404f93f2235cbaa446f3 |RegionOne | http://172.16.10.200:9292/ | http://10.10.10.51:9292/ | http://10.10.10.51:9292/ | 1372321775df4b6c9d894d299412acc5 |

| 37c6d6ce5e954f449cac46194ea077d0 |RegionOne | http://172.16.10.200:8776/v1/$(tenant_id)s | http://10.10.10.51:8776/v1/$(tenant_id)s| http://10.10.10.51:8776/v1/$(tenant_id)s | af1af90fcd954faa906b8c567f060d0d |

| 3e9f2ce578e248b5945a099f69141312 |RegionOne | http://172.16.10.200:8773/services/Cloud | http://10.10.10.51:8773/services/Cloud | http://10.10.10.51:8773/services/Admin | 46be108df1084f5e9a2702ecfd517aa3 |

| 5240aec5803b4d7094b66dfa4ecd6c55 |RegionOne | http://172.16.10.200:9696/ | http://10.10.10.51:9696/ | http://10.10.10.51:9696/ | 6e4dacf1c7984a7baa216aeec2e5831d |

| 8128afab6f034d03820704fe8d7fc817 |RegionOne | http://172.16.10.200:5000/v2.0 | http://10.10.10.51:5000/v2.0 | http://10.10.10.51:35357/v2.0 | a68ed6a40998477abc0990bd56dcbd86 |

| 960af7efa1b84a778fb6c40a6015a497 |RegionOne | http://172.16.10.200:8774/v2/$(tenant_id)s |http://10.10.10.51:8774/v2/$(tenant_id)s |http://10.10.10.51:8774/v2/$(tenant_id)s | 117539645e29490f8244d4bfc2ed64bf |

+----------------------------------+-----------+--------------------------------------------+------------------------------------------+------------------------------------------+----------------------------------+



2.7 Glance

Install Glance:

apt-get install -y glance

Update /etc/glance/glance-api-paste.ini:

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

delay_auth_decision = true

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = 123456


Update /etc/glance/glance-registry-paste.ini:

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = glance

admin_password = 123456


Update /etc/glance/glance-api.conf:


sql_connection = mysql://glance:glance@10.10.10.51/glance


[paste_deploy]

flavor = keystone


Update /etc/glance/glance-registry.conf:


sql_connection = mysql://glance:glance@10.10.10.51/glance


[paste_deploy]

flavor = keystone


Restart the glance-api and glance-registry services:

service glance-api restart; service glance-registry restart


Initialize the Glance database:

glance-manage db_sync


Download a test image and upload it:

wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img

glance image-create --name="CirrOS0.3.1" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-i386-disk.img

root@control:~# file cirros-0.3.1-i386-disk.img

cirros-0.3.1-i386-disk.img: QEMU QCOW Image (v2), 41126400 bytes


List the images:

root@control:~# glance image-list

+--------------------------------------+--------------+-------------+------------------+------------+--------+

| ID | Name | Disk Format | Container Format |Size | Status |

+--------------------------------------+--------------+-------------+------------------+------------+--------+

| fe4210d1-783c-4b7b-9cfd-10f02f7d3c20 |cirros 0.3.1 | qcow2 | bare | 12251136 | active |

| 918dd333-2e9d-4ad2-bcce-9c6be9aec81b |debian | vmdk |bare | 464421376 | active |

| 4dd939cc-54ce-4af0-a170-3d6b778e651f |ubuntu-13.04 | qcow2 | bare | 233504768 | active |

| 43c2bb24-2c4f-4b53-a2da-6ac5fa525dbd |win2003sp2 | qcow2 | bare | 1822621696 | active |

+--------------------------------------+--------------+-------------+------------------+------------+--------+


Note: images can be downloaded from the internet, built by hand, or imported from another virtualization platform such as a vSphere OVF template. The appendix includes notes on building a Windows image and importing a VMware OVF template.


2.8. Quantum

Install quantum-server:


apt-get install -y quantum-server

Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:


#Under the database section

[DATABASE]

sql_connection = mysql://quantum:quantum@10.10.10.51/quantum


#Under the OVS section

[OVS]

tenant_network_type = gre

tunnel_id_ranges = 1:1000

enable_tunneling = True


#Firewall driver for realizing quantum security group function

[SECURITYGROUP]

firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


Edit /etc/quantum/api-paste.ini:


[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = quantum

admin_password = 123456


Update /etc/quantum/quantum.conf:


[keystone_authtoken]

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = quantum

admin_password = 123456

signing_dir = /var/lib/quantum/keystone-signing


Restart the quantum-server service:

service quantum-server restart



2.9. Nova


Install the Nova packages:

apt-get install -y nova-api nova-cert novnc nova-consoleauth nova-scheduler nova-novncproxy nova-doc nova-conductor


Edit /etc/nova/api-paste.ini:

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = 123456

signing_dirname = /tmp/keystone-signing-nova

# Workaround for https://bugs.launchpad.net/nova/+bug/1154809

auth_version = v2.0


Edit /etc/nova/nova.conf:

root@control:~# cat /etc/nova/nova.conf

[DEFAULT]

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/run/lock/nova

verbose=True

api_paste_config=/etc/nova/api-paste.ini

compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler

rabbit_host=10.10.10.51

nova_url=http://10.10.10.51:8774/v1.1/

sql_connection=mysql://nova:nova@10.10.10.51/nova

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth

use_deprecated_auth=false

auth_strategy=keystone


# Imaging service

glance_api_servers=10.10.10.51:9292

image_service=nova.image.glance.GlanceImageService


# Vnc configuration

novnc_enabled=true

novncproxy_base_url=http://172.16.10.200:6080/vnc_auto.html

novncproxy_port=6080

vncserver_proxyclient_address=10.10.10.51

vncserver_listen=0.0.0.0


# Network settings

network_api_class=nova.network.quantumv2.api.API

quantum_url=http://10.10.10.51:9696

quantum_auth_strategy=keystone

quantum_admin_tenant_name=service

quantum_admin_username=quantum

quantum_admin_password=123456

quantum_admin_auth_url=http://10.10.10.51:35357/v2.0

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

#If you want Quantum + Nova Security groups

firewall_driver=nova.virt.firewall.NoopFirewallDriver

security_group_api=quantum

#If you want Nova Security groups only, comment the two lines above and uncomment line -1-.

#-1-firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver


#Metadata

service_quantum_metadata_proxy = True

quantum_metadata_proxy_shared_secret = helloOpenStack


# Compute #

compute_driver=libvirt.LibvirtDriver


# Cinder #

volume_api_class=nova.volume.cinder.API

volume_driver=nova.volume.driver.ISCSIDriver

enabled_apis=ec2,osapi_compute,metadata

osapi_volume_listen_port=5900

volume_group = cinder-volumes

volume_name_template = volume-%s

iscsi_helper=tgtadm

# add this, or volumes cannot be attached

iscsi_ip_address=10.10.10.51


Initialize the Nova database:

nova-manage db sync


Restart the Nova services:

cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done


Check that the Nova services are running:

root@control:~# nova-manage service list

Binary Host Zone Status State Updated_At

nova-cert control internal enabled :-) 2013-10-28 09:56:13

nova-conductor control internal enabled :-) 2013-10-28 09:56:11

nova-consoleauth control internal enabled :-) 2013-10-28 09:56:11

nova-scheduler control internal enabled :-) 2013-10-28 09:56:13

nova-console control internal enabled :-) 2013-10-28 09:56:11



2.10. Cinder


Install the Cinder packages:

apt-get install -y cinder-api cinder-scheduler cinder-volume iscsitarget open-iscsi iscsitarget-dkms


Enable the iSCSI target service:

sed -i 's/false/true/g' /etc/default/iscsitarget


Start the services:

service iscsitarget start

service open-iscsi start


Configure /etc/cinder/api-paste.ini:


[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

service_protocol = http

service_host = 10.10.10.51

service_port = 5000

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = cinder

admin_password = 123456

signing_dir = /var/lib/cinder


Edit /etc/cinder/cinder.conf:

root@control:~# cat /etc/cinder/cinder.conf

[DEFAULT]

rootwrap_config = /etc/cinder/rootwrap.conf

api_paste_confg = /etc/cinder/api-paste.ini

#iscsi_helper = ietadm    (I use the default tgt service here)

iscsi_helper = tgtadm

volume_name_template = volume-%s

volume_group = cinder-volumes

verbose = True

auth_strategy = keystone

state_path = /var/lib/cinder

lock_path = /var/lock/cinder

volumes_dir = /var/lib/cinder/volumes

sql_connection = mysql://cinder:cinder@10.10.10.51/cinder

#RPC

rabbit_host = 10.10.10.51

rabbit_password = guest

iscsi_ip_prefix = 10.10.10

rpc_backend = cinder.openstack.common.rpc.impl_kombu

iscsi_ip_address = 10.10.10.51

#API

osapi_volume_extension = cinder.api.contrib.standard_extensions


Initialize the Cinder database:

cinder-manage db sync


Create a volume group named cinder-volumes (the VM was given a second disk for this purpose):


#pvcreate /dev/sdb

#vgcreate cinder-volumes /dev/sdb
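
To confirm the volume group exists before continuing (an extra check, not in the original notes):

#vgdisplay cinder-volumes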


Restart the Cinder services:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i restart; done


Confirm that all Cinder services are running:

cd /etc/init.d/; for i in $( ls cinder-* ); do sudo service $i status; done



2.11. Horizon


Install Horizon:

apt-get install -y openstack-dashboard memcached


If desired, remove the Ubuntu theme:

dpkg --purge openstack-dashboard-ubuntu-theme


Restart Apache and memcached:

service apache2 restart; service memcached restart


Log in to the OpenStack dashboard:

http://172.16.10.200/horizon  (username admin, password 123456)



3. Network node

3.1. Preparation

Add the repository:

apt-get install -y ubuntu-cloud-keyring

echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >> /etc/apt/sources.list.d/grizzly.list


Update the system:

apt-get update -y

apt-get upgrade -y

apt-get dist-upgrade -y


Install the NTP service:


apt-get install -y ntp


Configure NTP to sync time from the control node:

#Comment out the Ubuntu NTP servers

sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf


#Set the network node to follow your controller node

sed -i 's/server ntp.ubuntu.com/server 10.10.10.51/g' /etc/ntp.conf


Restart the NTP service:

service ntp restart


Install other packages:

apt-get install -y vlan bridge-utils


Enable IP forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf


# sysctl -p to apply the change immediately


3.2. Networking

Initial configuration of the three NICs:


# OpenStack management

auto eth0

iface eth0 inet static

address 10.10.10.52

netmask 255.255.255.0


# VM Configuration

auto eth1

iface eth1 inet static

address 10.20.20.52

netmask 255.255.255.0


# VM internet Access

auto eth2

iface eth2 inet static

address 172.16.10.201

netmask 255.255.255.0



3.4. Open vSwitch. Unlike the GitHub guide, this is not split into two steps here; that guide does not set up internet access at this point and only finishes the bridge work after the Quantum packages are installed. Since eth2/br-ex gets an IP here so the node can keep downloading packages, it can all be done in one pass.


Install Open vSwitch. Note that all three packages are needed; besides OVS itself, OpenStack also relies on the kernel brcompat module:

apt-get install openvswitch-switch openvswitch-brcompat openvswitch-datapath-dkms


Enable ovs-brcompatd:

sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch


Start openvswitch-switch:

root@network:~# service openvswitch-switch restart

* Killing ovs-brcompatd (1327)

* Killing ovs-vswitchd (1195)

* Killing ovsdb-server (1185)

* Starting ovsdb-server

* Configuring Open vSwitch system IDs

* Starting ovs-vswitchd

2013-10-29T02:45:50Z|00001|brcompatd|WARN|Bridge compatibility is deprecated and may be removed no earlier than February 2013

* Starting ovs-brcompatd

Wait until ovs-brcompatd, ovs-vswitchd and ovsdb-server are all running.


Then check the brcompat module:

# lsmod | grep brcompat

brcompat 13512 0

openvswitch 84038 7 brcompat


If brcompat still will not start, run:

/etc/init.d/openvswitch-switch force-reload-kmod

If that still fails, reboot the server. On Ubuntu 13.04 (64-bit), installing the three packages above is normally enough; no extra steps are needed.


Create the bridges:

ovs-vsctl add-br br-int        # br-int is the integration bridge for the VMs

ovs-vsctl add-br br-ex         # br-ex is used to reach the VMs from the internet

ovs-vsctl add-port br-ex eth2  # bridge br-ex onto eth2


After these commands eth2 stops working until the interfaces file is updated.

The final NIC configuration:

root@network:~# cat /etc/network/interfaces

# This file describes the network interfaces available on your system

# and how to activate them. For more information, see interfaces(5).


# The loopback network interface

auto lo

iface lo inet loopback


# The primary network interface

#auto eth0

#iface eth0 inet dhcp

# OpenStack management

auto eth0

iface eth0 inet static

address 10.10.10.52

netmask 255.255.255.0


# VM Configuration

auto eth1

iface eth1 inet static

address 10.20.20.52

netmask 255.255.255.0


# VM internet Access

auto eth2

iface eth2 inet manual

up ifconfig $IFACE 0.0.0.0 up

down ifconfig $IFACE down


auto br-ex

iface br-ex inet static

address 172.16.10.201

netmask 255.255.255.0

gateway 172.16.10.254

dns-nameservers 8.8.8.8


Then reboot the server (or restart networking); only continue once both internet access and the internal networks work.

Inspect the bridges:

ovs-vsctl list-br

ovs-vsctl show



3.5. Quantum

Install the Quantum openvswitch agent (L2 switching), L3 agent (routing), DHCP agent and metadata agent:

apt-get -y install quantum-plugin-openvswitch-agent quantum-dhcp-agent quantum-l3-agent quantum-metadata-agent


Edit /etc/quantum/api-paste.ini:


[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = quantum

admin_password = 123456


Edit the OVS plugin configuration /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:


#Under the database section

[DATABASE]

sql_connection = mysql://quantum:quantum@10.10.10.51/quantum


#Under the OVS section

[OVS]

tenant_network_type = gre

tunnel_id_ranges = 1:1000

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 10.20.20.52

enable_tunneling = True


#Firewall driver for realizing quantum security group function

[SECURITYGROUP]

firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


Set up /etc/quantum/quantum.conf:

root@network:~# cat /etc/quantum/quantum.conf |grep -v ^# |grep -v ^$

[DEFAULT]

lock_path = $state_path/lock

bind_host = 0.0.0.0

bind_port = 9696

core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

api_paste_config = /etc/quantum/api-paste.ini

control_exchange = quantum

fake_rabbit = False

rabbit_host = 10.10.10.51

rabbit_password = guest

rabbit_port = 5672

rabbit_userid = guest

notification_driver = quantum.openstack.common.notifier.rpc_notifier

default_notification_level = INFO

notification_topics = notifications

[QUOTAS]

[DEFAULT_SERVICETYPE]

[AGENT]

root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

[keystone_authtoken]

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = quantum

admin_password = 123456

signing_dir = /var/lib/quantum/keystone-signing



Update /etc/quantum/metadata_agent.ini (used to talk to the control node):

root@network:~# cat /etc/quantum/metadata_agent.ini |grep -v ^# |grep -v ^$

[DEFAULT]

auth_url = http://10.10.10.51:35357/v2.0

auth_region = RegionOne

admin_tenant_name = service

admin_user = quantum

admin_password = 123456

nova_metadata_ip = 10.10.10.51

nova_metadata_port = 8775

metadata_proxy_shared_secret = helloOpenStack


/etc/quantum/l3_agent.ini and /etc/quantum/dhcp_agent.ini are left unchanged.


Set the sudo permissions:

root@network:~# cat /etc/sudoers.d/quantum_sudoers

#Defaults:quantum !requiretty

quantum ALL=NOPASSWD: ALL

#quantum ALL = (root) NOPASSWD:/usr/bin/quantum-rootwrap


Restart all Quantum services:

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i restart; done


Check the status of every service; for any that failed to start, look at the logs under /var/log/quantum:

cd /etc/init.d/; for i in $( ls quantum-* ); do sudo service $i status; done



4. Compute Node


4.1. Prepare the node

Add the repositories:

#apt-get install -y ubuntu-cloud-keyring

#apt-get install -y gplhost-archive-keyring

root@c03:/var/log/nova# cat /etc/apt/sources.list.d/cloud-archive.list

deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main

root@c03:/var/log/nova# cat /etc/apt/sources.list.d/grizzly.list

deb http://archive.gplhost.com/debian grizzly main

deb http://archive.gplhost.com/debian grizzly-backports main


Update the system:

apt-get update -y

apt-get upgrade -y

apt-get dist-upgrade -y


Install the NTP service:


apt-get install -y ntp


Configure NTP to sync time from the control node:

#Comment out the Ubuntu NTP servers

sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf


#Set the compute node to follow your controller node

sed -i 's/server ntp.ubuntu.com/server 10.10.10.51/g' /etc/ntp.conf


Restart the NTP service:

service ntp restart


Install other packages:

apt-get install -y vlan bridge-utils


Enable IP forwarding:

sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf


# sysctl -p to apply the change immediately


4.2. Network configuration

eth2 exists only to download packages and can be removed afterwards.

root@c03:~# cat /etc/network/interfaces

# This file describes the network interfaces available on your system

# and how to activate them. For more information, see interfaces(5).


# The loopback network interface

auto lo

iface lo inet loopback


# The primary network interface

auto eth2

iface eth2 inet dhcp

auto eth0

iface eth0 inet static

address 10.10.10.55

netmask 255.255.255.0

auto eth1

iface eth1 inet static

address 10.20.20.55

netmask 255.255.255.0


4.3 Install the Nova compute package

# apt-get install nova-compute-qemu    (this host is itself a virtual machine, so KVM is not available)

Note:

nova-compute-kvm requires that your CPU supports hardware-assisted virtualization (HVM) such as Intel VT-x or AMD-V. If your CPU does not support this, or if you are already running in a virtualized environment, you can instead use the nova-compute-qemu package. This package provides software-based virtualization.


Edit the authtoken section of /etc/nova/api-paste.ini:

[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = nova

admin_password = 123456

signing_dir = /tmp/keystone-signing-nova

auth_version = v2.0


Edit /etc/nova/nova.conf:

root@c03:~# cat /etc/nova/nova.conf

[DEFAULT]

dhcpbridge_flagfile=/etc/nova/nova.conf

dhcpbridge=/usr/bin/nova-dhcpbridge

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/var/lock/nova

force_dhcp_release=True

iscsi_helper=tgtadm

iscsi_ip_address=10.10.10.51

libvirt_use_virtio_for_bridges=True

connection_type=libvirt

root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

verbose=True

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

volumes_path=/var/lib/nova/volumes

enabled_apis=ec2,osapi_compute,metadata

# General

verbose=True

my_ip=10.10.10.55

rabbit_host = 10.10.10.51

rabbit_password = guest


auth_strategy=keystone

ec2_host=10.10.10.51

ec2_url=http://10.10.10.51:8773/services/Cloud


# Networking

libvirt_use_virtio_for_bridges=True

network_api_class=nova.network.quantumv2.api.API

quantum_url=http://10.10.10.51:9696

quantum_auth_strategy=keystone

quantum_admin_tenant_name=service

quantum_admin_username=quantum

quantum_admin_password=123456

quantum_admin_auth_url=http://10.10.10.51:35357/v2.0


# Security Groups

firewall_driver=nova.virt.firewall.NoopFirewallDriver

security_group_api=quantum


# Compute

compute_driver=libvirt.LibvirtDriver

connection_type=libvirt


# Cinder

volume_api_class=nova.volume.cinder.API

volume_driver=nova.volume.driver.ISCSIDriver

enabled_apis=ec2,osapi_compute,metadata

osapi_volume_listen_port=5900

cinder_catalog_info=volume:cinder:internalURL

iscsi_helper=tgtadm

volume_name_template = volume-%s

volume_group = cinder-volumes


# Glance

glance_api_servers=10.10.10.51:9292

image_service=nova.image.glance.GlanceImageService


# novnc

novnc_enabled=true

novncproxy_base_url=http://172.16.10.200:6080/vnc_auto.html

novncproxy_port=6080

vncserver_proxyclient_address=10.10.10.55

vncserver_listen=0.0.0.0


Check the libvirt type:

root@c03:~# cat /etc/nova/nova-compute.conf

[DEFAULT]

libvirt_type=qemu

compute_driver=libvirt.LibvirtDriver


Remove the default libvirt bridge (leaving it does no harm):

virsh net-destroy default

virsh net-undefine default



Start the nova-compute service:

service nova-compute restart

Check that the Nova services all show a smiley:

root@control:~# nova-manage service list |grep -v c01 |grep -v c02

Binary Host Zone Status State Updated_At

nova-cert control internal enabled :-) 2013-10-29 04:29:43

nova-conductor control internal enabled :-) 2013-10-29 04:29:38

nova-consoleauth control internal enabled :-) 2013-10-29 04:29:42

nova-scheduler control internal enabled :-) 2013-10-29 04:29:43

nova-compute c03 nova enabled :-) 2013-10-29 04:29:36

nova-console control internal enabled :-) 2013-10-29 04:29:43



4.4. Open vSwitch

Install Open vSwitch. Note that all three packages are needed; besides OVS itself, OpenStack also relies on the kernel brcompat module:

apt-get install openvswitch-switch openvswitch-brcompat openvswitch-datapath-dkms


Enable ovs-brcompatd:

sed -i 's/# BRCOMPAT=no/BRCOMPAT=yes/g' /etc/default/openvswitch-switch


Start openvswitch-switch:

# service openvswitch-switch restart

* Killing ovs-brcompatd (1327)

* Killing ovs-vswitchd (1195)

* Killing ovsdb-server (1185)

* Starting ovsdb-server

* Configuring Open vSwitch system IDs

* Starting ovs-vswitchd

2013-10-29T02:45:50Z|00001|brcompatd|WARN|Bridge compatibility is deprecated and may be removed no earlier than February 2013

* Starting ovs-brcompatd

Wait until ovs-brcompatd, ovs-vswitchd and ovsdb-server are all running.


Then check the brcompat module:

# lsmod | grep brcompat

brcompat 13512 0

openvswitch 84038 7 brcompat


If brcompat still will not start, run:

/etc/init.d/openvswitch-switch force-reload-kmod

If that still fails, reboot the server. On Ubuntu 13.04 (64-bit), installing the three packages above is normally enough; no extra steps are needed.


Create the br-int bridge:

ovs-vsctl add-br br-int




4.5. Quantum

Install the Quantum openvswitch agent:

apt-get install quantum-plugin-openvswitch-agent

Edit the OVS plugin configuration file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini:

root@c03:~# cat /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini |grep -v ^# |grep -v ^$

[DATABASE]

sql_connection = mysql://quantum:quantum@10.10.10.51/quantum

reconnect_interval = 2

[OVS]

tenant_network_type = gre

tunnel_id_ranges = 1:1000

local_ip = 10.20.20.55

enable_tunneling = True

[AGENT]

polling_interval = 2

[SECURITYGROUP]

firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


Edit /etc/quantum/quantum.conf:

root@c03:~# cat /etc/quantum/quantum.conf |grep -v ^# |grep -v ^$

[DEFAULT]

verbose = True

lock_path = $state_path/lock

bind_host = 0.0.0.0

bind_port = 9696

core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

api_paste_config = /etc/quantum/api-paste.ini

control_exchange = quantum

fake_rabbit = False

rabbit_host = 10.10.10.51

rabbit_password = guest

notification_driver = quantum.openstack.common.notifier.rpc_notifier

default_notification_level = INFO

notification_topics = notifications

[QUOTAS]

[DEFAULT_SERVICETYPE]

[AGENT]

root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf

[keystone_authtoken]

auth_host = 10.10.10.51

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = quantum

admin_password = 123456

signing_dir = /var/lib/quantum/keystone-signing

Start the agent:

service quantum-plugin-openvswitch-agent restart


Install additional packages:

For instances to attach Cinder volumes successfully, a few extra helper packages are needed. This is mentioned in the official troubleshooting guide but easy to miss elsewhere (the system was a minimal install, so even mysql-client was missing):

libsysfs2_2.1.0+repack-2_amd64.deb  multipath-tools_0.4.9-3ubuntu7_amd64.deb  sg3-utils_1.33-1build1_amd64.deb

mysql-client-core-5.5_5.5.32-0ubuntu0.13.04.1_amd64.deb  sysfsutils_2.1.0+repack-2_amd64.deb


5. Creating VMs


Both of these steps can be done from the command line by following the official documentation; that looks clever, but it is really not as simple and clear as logging in to the dashboard.

5.1 Create the Quantum networks

The basic steps, summarized (a CLI sketch follows the list):

Create a tenant (each tenant can have its own networks and VMs, but the external network is shared).

Create the external network.

Create the external subnet (with a floating IP range that is mapped to the VMs' internal IPs so they can be reached from outside).

Create the internal network.

Create the internal subnet (allocated to the instances).

Create a router.

Set the router's gateway to the external network and add the internal subnet as an interface, so the external and internal networks are connected through the router.
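
For reference, a rough command-line sketch of these steps, run on the control node with the creds file sourced. The names (ext_net, int_net, int_subnet, router1) and the floating IP pool are illustrative assumptions; the dashboard does exactly the same thing:

keystone tenant-create --name demo

quantum net-create ext_net -- --router:external=True

quantum subnet-create ext_net 172.16.10.0/24 --allocation-pool start=172.16.10.100,end=172.16.10.150 --gateway 172.16.10.254 --enable_dhcp=False

quantum net-create int_net

quantum subnet-create --name int_subnet int_net 192.168.1.0/24

quantum router-create router1

quantum router-gateway-set router1 ext_net

quantum router-interface-add router1 int_subnet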


Note that there is a handy command to check whether the network agents are all healthy:

root@control:~# quantum agent-list

+--------------------------------------+--------------------+---------+-------+----------------+

| id | agent_type | host | alive | admin_state_up |

+--------------------------------------+--------------------+---------+-------+----------------+

| 03d00de0-d78e-47ae-8b64-10971e140b45 |Open vSwitch agent | network | :-) |True |

| a7c840b3-de73-4ee4-8e1f-acb4ff9b2046 | L3agent | network | :-) | True |

| b6b66ba7-e733-42a4-bdf0-796787d48955 |DHCP agent | network | :-) | True |

| c3ce6c66-8a9a-4786-8d92-5325df54e0f0 |Open vSwitch agent | c03 | :-) | True |

+--------------------------------------+--------------------+---------+-------+----------------+


The resulting network topology:

The admin tenant:

The admin tenant has broad privileges and can see the whole network topology, but it cannot see the VMs inside other tenants.


The demo tenant:

This tenant has two subnets, 192.168.1.0/24 and 192.168.2.0/24.

The two subnets can reach each other through their external IPs, as the topology diagram shows; hosts on the same internal subnet talk to each other directly without going through the router.
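
To reach an instance from outside, a floating IP from the external network is associated with the port carrying the instance's fixed IP. A hedged sketch (the IDs are placeholders, not values from this deployment):

quantum floatingip-create ext_net

quantum port-list

quantum floatingip-associate <floatingip-id> <port-id>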

5.2 Virtual machines


Launch an instance: give it a name, pick an image and a network, and start it.



VNC access: note that the images downloaded from the official site only allow SSH login with a key pair (sshd_config is set to disallow password login). You can download the key and SSH in, change the sshd configuration from the VNC console, or build your own image.


After installing the extra packages mentioned above, attaching a volume finally works.
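
The same attach can be exercised from the command line (a sketch with placeholder IDs, assuming the creds file is sourced):

cinder create --display-name test_vol 1

nova volume-attach <instance-id> <volume-id> /dev/vdb

cinder list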



The debian image here was imported from an OVF template exported from ESXi 5, the windows2003 image was built by hand with a KVM virtual machine, and the others were downloaded from the internet.


The supported image import formats are shown below.
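
For an ESXi-sourced image like the debian one, a typical workflow is to pull the .vmdk disk out of the OVF/OVA package and upload it with the matching disk format; the file names below are placeholders, since the original only shows the end result:

tar xvf debian.ova

glance image-create --name=debian --disk-format=vmdk --container-format=bare --is-public=true < debian-disk1.vmdk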




Appendix: building a Windows image

Building a Windows 2003 image for OpenStack


Preparation

Download virtio-win-1.1.16.vfd

virtio-win-0.1-65.iso

windows_sp2.iso


Steps

qemu-img create -f raw windows2003.img 8G


sudo qemu-kvm -m 512 -no-reboot -boot order=d -drive file=windows2003.img,if=virtio,boot=off -drive file=WIN2003_SP2.iso,media=cdrom,boot=on -fda virtio-win-1.1.16.vfd -boot order=d,menu=on -usbdevice tablet -nographic -vnc :1


Then quickly connect with vncviewer 127.0.0.1:5901 and press F12 to get the boot menu; otherwise it boots straight from the hard disk. If that happens, kill the kvm process, start it again and press F12 quickly.


Boot from the CD-ROM (the default here).


F8 ,进入分区,格式化后重启,进程要重开,



sudo qemu-kvm -m 512 -no-reboot -boot order=d -drive file=windows2003.img,if=virtio,boot=off -drive file=WIN2003_SP2.iso,media=cdrom,boot=on -fda virtio-win-1.1.16.vfd -boot order=d,menu=on -usbdevice tablet -nographic -vnc :1

F12选择硬盘启动

重新开始安装。



When the installation has finished, shut the virtual machine down.


Boot the image again and install the virtio drivers:

sudo qemu-kvm -m 512 -drive file=windows2003.img -cdrom virtio-win-0.1-65.iso -net nic,model=virtio -net user -boot order=c -usbdevice tablet -nographic -vnc :1

After the virtio drivers are installed, shut down, then boot again to install the management tools:

sudo qemu-kvm -m 512 -drive file=windows2003r.img -cdrom WIN2003_SP2.iso -net nic,model=virtio -net user -boot order=c -usbdevice tablet -nographic -vnc :1



Upload the image:

glance add name="win2003X64" is_public=true container_format=ovf disk_format=raw < windows2003.img