Software:
VMware® Workstation 9.0
ubuntu-12.04.1-server-amd64.iso
Reference:
http://docs.openstack.org/essex/openstack-compute/starter/content/Server1-d1e537.html
I. Create the virtual machine
Note: the VM needs 2 virtual NICs and 2 hard disks, one 30 GB and one 10 GB; they will be used for nova-volume and Swift.
II. Install ubuntu-server
Note: choose manual partitioning and partition the 30 GB disk as follows; leave the 10 GB disk alone for now.
1. Create a root partition, 15 GB
2. Create a swap partition, 2 GB
3. From the remaining space, create a logical partition and, under "Use as", choose the last option (do not use), leaving it as a physical volume for nova-volume.
III. Begin the OpenStack installation (run the following as root)
The helper scripts are fairly long and are not listed here; please download them from the attachments at http://yuky1327.iteye.com/blog/1696604 and run them alongside this guide.
1. Enable the root account and set its password
sudo passwd root
2. Network Configuration
Edit the /etc/network/interfaces file so that it looks like this:
Note: make the static-address change from the VM console, not over an SSH session, otherwise you may end up unable to ping external networks.
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.1.50
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 202.96.128.166
auto eth1
iface eth1 inet static
address 10.0.1.1
netmask 255.255.255.0
network 10.0.1.0
broadcast 10.0.1.255
Restart the network now
sudo /etc/init.d/networking restart
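If you want to confirm that the new addresses are active and that the gateway configured above is still reachable, a quick optional check (192.168.1.1 is the gateway from the interfaces file above):
ip addr show eth0
ip addr show eth1
ping -c 3 192.168.1.1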
3. Update the base system and install bridge-utils
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bridge-utils
4.NTP Server
sudo apt-get install ntp
Open the file /etc/ntp.conf and add the following lines to make sure that the time on the server stays in sync with an external server. If the Internet connectivity is down, the NTP server uses its own hardware clock as the fallback.
server ntp.ubuntu.com
server 127.127.1.0
fudge 127.127.1.0 stratum 10
Restart the NTP server
sudo service ntp restart
5. Install the mysql-server and python-mysqldb packages
During installation you will be asked to set a root password for MySQL. The password used in this guide is "mygreatsecret".
sudo apt-get install mysql-server python-mysqldb
Change the bind address from 127.0.0.1 to 0.0.0.0 in /etc/mysql/my.cnf. It should be identical to this:
bind-address = 0.0.0.0
Restart MySQL server to ensure that it starts listening on all interfaces.
sudo restart mysql
Create MySQL databases to be used with nova, glance and keystone.
sudo mysql -uroot -pmygreatsecret -e 'CREATE DATABASE nova;'
sudo mysql -uroot -pmygreatsecret -e 'CREATE USER novadbadmin;'
sudo mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON nova.* TO 'novadbadmin'@'%';"
sudo mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'novadbadmin'@'%' = PASSWORD('novasecret');"
sudo mysql -uroot -pmygreatsecret -e 'CREATE DATABASE glance;'
sudo mysql -uroot -pmygreatsecret -e 'CREATE USER glancedbadmin;'
sudo mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON glance.* TO 'glancedbadmin'@'%';"
sudo mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'glancedbadmin'@'%' = PASSWORD('glancesecret');"
sudo mysql -uroot -pmygreatsecret -e 'CREATE DATABASE keystone;'
sudo mysql -uroot -pmygreatsecret -e 'CREATE USER keystonedbadmin;'
sudo mysql -uroot -pmygreatsecret -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystonedbadmin'@'%';"
sudo mysql -uroot -pmygreatsecret -e "SET PASSWORD FOR 'keystonedbadmin'@'%' = PASSWORD('keystonesecret');"
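As an optional sanity check, confirm that the databases and grants exist, using the same root credentials as above:
sudo mysql -uroot -pmygreatsecret -e 'SHOW DATABASES;'
sudo mysql -uroot -pmygreatsecret -e "SHOW GRANTS FOR 'novadbadmin'@'%';"
The first command should list nova, glance and keystone; the second should show the GRANT ALL PRIVILEGES line for novadbadmin.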
6.Install Keystone
sudo apt-get install keystone python-keystone python-keystoneclient
Open /etc/keystone/keystone.conf and change the line
admin_token = ADMIN
to
admin_token = admin
Since MySQL database is used to store keystone configuration, replace the following line in /etc/keystone/keystone.conf
connection = sqlite:////var/lib/keystone/keystone.db
to
connection = mysql://keystonedbadmin:[email protected]/keystone
Restart Keystone:
sudo service keystone restart
Run the following command to synchronise the database:
sudo keystone-manage db_sync
Add these variables to ~/.bashrc:
export SERVICE_ENDPOINT="http://localhost:35357/v2.0"
export SERVICE_TOKEN=admin
source ~/.bashrc
Next, create the tenants, users, and roles, add roles to users within tenants, list them to verify, and create the services and endpoints. The helper script does all of this; it will prompt for an email address and this machine's IP address:
./create_keystone_data.sh
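The script itself is not reproduced here. As a rough sketch, it runs keystone CLI commands of the following form (condensed from the official guide; the names, passwords and the 192.168.1.50 address are only examples of the values the script fills in, and the IDs are taken from the corresponding keystone *-list commands):
# Tenants, users and roles (the script creates an admin and a service tenant,
# and users for admin, nova, glance and swift)
keystone tenant-create --name admin
keystone tenant-create --name service
keystone user-create --name admin --pass admin --email [email protected]
keystone role-create --name admin
# Bind a user to a role within a tenant, using IDs from keystone user-list / role-list / tenant-list
keystone user-role-add --user <user_id> --role <role_id> --tenant_id <tenant_id>
# Services and their endpoints (nova shown here; glance, keystone, swift, volume and ec2 are similar)
keystone service-create --name nova --type compute --description 'OpenStack Compute Service'
keystone endpoint-create --region myregion --service_id <service_id> \
  --publicurl 'http://192.168.1.50:8774/v2/$(tenant_id)s' \
  --adminurl 'http://192.168.1.50:8774/v2/$(tenant_id)s' \
  --internalurl 'http://192.168.1.50:8774/v2/$(tenant_id)s'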
7.Install glance
sudo apt-get install glance glance-api glance-client glance-common glance-registry python-glance
Glance uses SQLite by default. MySQL and PostgreSQL can also be configured to work with Glance.
Edit /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini, changing
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
to
admin_tenant_name = service
admin_user = glance
admin_password = glance
Open the file /etc/glance/glance-registry.conf and edit the line which contains the option "sql_connection =" to this:
sql_connection = mysql://glancedbadmin:[email protected]/glance
Then add the following lines at the end of the file:
[paste_deploy]
flavor = keystone
Open /etc/glance/glance-api.conf and add the following lines at the end of the document.
[paste_deploy]
flavor = keystone
Create the glance schema in the MySQL database:
sudo glance-manage version_control 0
sudo glance-manage db_sync
Restart glance-api and glance-registry after making the above changes.
sudo restart glance-api
sudo restart glance-registry
Add these variables to ~/.bashrc:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL="http://localhost:5000/v2.0/"
source ~/.bashrc
To test whether glance is set up correctly, execute the following command.
glance index
On success the command prints nothing; on failure it prints an error message.
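With glance working you can already register an image from the command line. For example, following the official guide (the image file name here is just an example of a downloaded Ubuntu cloud image):
glance add name="Ubuntu 12.04 server" is_public=true container_format=ovf disk_format=qcow2 < ubuntu-12.04-server-cloudimg-amd64-disk1.img
Afterwards, glance index should list the newly added image.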
8.Install nova
sudo apt-get install nova-api nova-cert nova-compute nova-compute-kvm nova-doc nova-network nova-objectstore nova-scheduler nova-volume rabbitmq-server novnc nova-consoleauth
Run edit_nova_conf.sh to edit the /etc/nova/nova.conf file
./edit_nova_conf.sh
The script will prompt you to enter:
the MySQL server address
this machine's IP address
the start of the floating IP range (default 192.168.1.225)
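The script is not listed here; it essentially rewrites /etc/nova/nova.conf from your answers. As a rough reference, following the official guide and the values used in this walkthrough, the result looks something like the following (192.168.1.50 stands for whatever local IP you entered, and the database password is the one created earlier):
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/run/lock/nova
--use_deprecated_auth=false
--auth_strategy=keystone
--scheduler_driver=nova.scheduler.simple.SimpleScheduler
--s3_host=192.168.1.50
--ec2_host=192.168.1.50
--rabbit_host=192.168.1.50
--glance_api_servers=192.168.1.50:9292
--image_service=nova.image.glance.GlanceImageService
--sql_connection=mysql://novadbadmin:[email protected]/nova
--api_paste_config=/etc/nova/api-paste.ini
--libvirt_type=kvm
--vnc_enabled=true
--vncproxy_url=http://192.168.1.50:6080
--network_manager=nova.network.manager.FlatDHCPManager
--public_interface=eth0
--flat_interface=eth1
--flat_network_bridge=br100
--fixed_range=10.0.1.1/27
--floating_range=192.168.1.225/27
--network_size=32
--verbose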
Create a Physical Volume.
sudo pvcreate /dev/sda5
Create a Volume Group named nova-volumes.
sudo vgcreate nova-volumes /dev/sda5
Change the ownership of the /etc/nova folder and permissions for /etc/nova/nova.conf:
sudo chown -R nova:nova /etc/nova
sudo chmod 644 /etc/nova/nova.conf
Open /etc/nova/api-paste.ini and at the end of the file, edit the following lines:
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
to
admin_tenant_name = service
admin_user = nova
admin_password = nova
Enable IPv4 forwarding; without it, hosts outside can reach the instances but the instances cannot reach external networks:
sysctl -w net.ipv4.ip_forward=1
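sysctl -w only changes the running kernel. To keep forwarding enabled after a reboot, also make sure /etc/sysctl.conf contains this line, uncommented:
net.ipv4.ip_forward=1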
Create nova schema in the MySQL database.
sudo nova-manage db sync
Create the fixed (private) network:
nova-manage network create private --fixed_range_v4=10.0.1.1/27 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32
Create the floating IP range; it must match the floating_range value entered to the edit_nova_conf.sh script:
nova-manage floating create --ip_range=192.168.1.225/27
Restart nova services.
sudo restart libvirt-bin; sudo restart nova-network; sudo restart nova-compute; sudo restart nova-api; sudo restart nova-objectstore; sudo restart nova-scheduler; sudo restart nova-volume; sudo restart nova-consoleauth;
To test if nova is setup correctly run the following command.
sudo nova-manage service list
Binary           Host     Zone  Status   State  Updated_At
nova-network     server1  nova  enabled  :-)    2012-04-20 08:58:43
nova-scheduler   server1  nova  enabled  :-)    2012-04-20 08:58:44
nova-volume      server1  nova  enabled  :-)    2012-04-20 08:58:44
nova-compute     server1  nova  enabled  :-)    2012-04-20 08:58:45
nova-cert        server1  nova  enabled  :-)    2012-04-20 08:58:43
9. Install OpenStack Dashboard
sudo apt-get install openstack-dashboard
Restart apache with the following command:
sudo service apache2 restart
Open a browser, go to http://192.168.1.200, and log in with username admin and password admin.
10. Install Swift
sudo apt-get install swift swift-proxy swift-account swift-container swift-object
sudo apt-get install xfsprogs curl python-pastedeploy
Swift storage backend: using a partition as the storage device
If you set aside a partition for Swift during the OS installation, you can use it directly. If you have unused, unpartitioned space on a disk (here /dev/sdb, the 10 GB disk), create a partition on it with fdisk or parted, format it as an XFS filesystem, and use it as the backend. You also need to add the mount point to /etc/fstab.
CAUTION: Replace /dev/sdb with your actual device. The following assumes there is unused, unpartitioned space on /dev/sdb.
root@bogon:/dev# sudo fdisk /dev/sdb
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): e
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Command (m for help): n
Partition type:
p primary (0 primary, 1 extended, 3 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (4096-20971519, default 4096):
Using default value 4096
Last sector, +sectors or +size{K,M,G} (4096-20971519, default 20971519):
Using default value 20971519
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Check that the partitions were created successfully:
root@bogon:/dev# fdisk /dev/sdb
Command (m for help): p
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
107 heads, 17 sectors/track, 11529 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x937847e1
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1         2048    20971519    10484736    5  Extended
/dev/sdb5         4096    20971519    10483712   83  Linux
This creates a partition (here /dev/sdb5) that we can now format as an XFS filesystem. Run 'sudo fdisk -l' in the terminal to view and verify the partition table, and make sure the partition you want to use is listed there. The following step works only if xfsprogs is installed.
sudo mkfs.xfs -i size=1024 /dev/sdb5
Create a directory /mnt/swift_backend to use as a mount point for the partition we just created:
sudo mkdir /mnt/swift_backend
Add the following line to /etc/fstab:
/dev/sdb5 /mnt/swift_backend xfs noatime,nodiratime,nobarrier,logbufs=8 0 0
Now before mounting the backend that will be used, create some nodes to be used as storage devices and set ownership to 'swift' user and group.
sudo mount /mnt/swift_backend
pushd /mnt/swift_backend
sudo mkdir node1 node2 node3 node4
popd
sudo chown swift.swift /mnt/swift_backend/*
for i in {1..4}; do sudo ln -s /mnt/swift_backend/node$i /srv/node$i; done;
sudo mkdir -p /etc/swift/account-server /etc/swift/container-server /etc/swift/object-server /srv/node1/device /srv/node2/device /srv/node3/device /srv/node4/device
sudo mkdir /run/swift
sudo chown -L -R swift.swift /etc/swift /srv/node[1-4]/ /run/swift
Add the following to /etc/rc.local, before the "exit 0" line:
sudo mkdir /run/swift
sudo chown swift.swift /run/swift
Open /etc/default/rsync and set:
RSYNC_ENABLE=true
Create /etc/rsyncd.conf with the following contents:
# General stuff
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /run/rsyncd.pid
address = 127.0.0.1
# Account Server replication settings
[account6012]
max connections = 25
path = /srv/node1/
read only = false
lock file = /run/lock/account6012.lock
[account6022]
max connections = 25
path = /srv/node2/
read only = false
lock file = /run/lock/account6022.lock
[account6032]
max connections = 25
path = /srv/node3/
read only = false
lock file = /run/lock/account6032.lock
[account6042]
max connections = 25
path = /srv/node4/
read only = false
lock file = /run/lock/account6042.lock
# Container server replication settings
[container6011]
max connections = 25
path = /srv/node1/
read only = false
lock file = /run/lock/container6011.lock
[container6021]
max connections = 25
path = /srv/node2/
read only = false
lock file = /run/lock/container6021.lock
[container6031]
max connections = 25
path = /srv/node3/
read only = false
lock file = /run/lock/container6031.lock
[container6041]
max connections = 25
path = /srv/node4/
read only = false
lock file = /run/lock/container6041.lock
# Object Server replication settings
[object6010]
max connections = 25
path = /srv/node1/
read only = false
lock file = /run/lock/object6010.lock
[object6020]
max connections = 25
path = /srv/node2/
read only = false
lock file = /run/lock/object6020.lock
[object6030]
max connections = 25
path = /srv/node3/
read only = false
lock file = /run/lock/object6030.lock
[object6040]
max connections = 25
path = /srv/node4/
read only = false
lock file = /run/lock/object6040.lock
Restart rsync.
sudo service rsync restart
Configure Swift Components
Run the following command to generate a random string:
root@bogon:/srv# od -t x8 -N 8 -A n < /dev/random
7736e3116c693239
Create /etc/swift/swift.conf and put the random string in it:
[swift-hash]
# random unique string that can never change (DO NOT LOSE). I'm using 7736e3116c693239.
# od -t x8 -N 8 -A n < /dev/random
# The above command can be used to generate random a string.
swift_hash_path_suffix = 7736e3116c693239
Write the following into /etc/swift/proxy-server.conf:
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift
[pipeline:main]
# Order of execution of modules defined below
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
set log_name = swift-proxy
set log_facility = LOG_LOCAL0
set log_level = INFO
set access_log_name = swift-proxy
set access_log_facility = SYSLOG
set access_log_level = INFO
set log_headers = True
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:cache]
use = egg:swift#memcache
set log_name = cache
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_protocol = http
auth_host = 127.0.0.1
auth_port = 35357
auth_token = admin
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
admin_token = admin
admin_tenant_name = service
admin_user = swift
admin_password = swift
delay_auth_decision = 0
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
is_admin = true
Configure the Swift account server, container server, and object server using the helper scripts:
./swift_account_server.sh
./swift_container_server.sh
./swift_object_server.sh
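The three scripts are not reproduced here; they generate one configuration file per storage node under /etc/swift/account-server/, /etc/swift/container-server/ and /etc/swift/object-server/. For reference, the first account server config (/etc/swift/account-server/1.conf in the official guide) looks roughly like this; nodes 2-4 use /srv/node2-4, the matching ports from the rsyncd.conf above (6022, 6032, 6042) and log facilities LOG_LOCAL3-5, and the container and object servers follow the same pattern on their own ports:
[DEFAULT]
devices = /srv/node1
mount_check = false
bind_port = 6012
user = swift
log_facility = LOG_LOCAL2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
vm_test_mode = no
[account-auditor]
[account-reaper]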
Edit /etc/swift/container-server.conf and append the following at the end:
[container-sync]
Configure Swift Rings
pushd /etc/swift
sudo swift-ring-builder object.builder create 18 3 1
sudo swift-ring-builder container.builder create 18 3 1
sudo swift-ring-builder account.builder create 18 3 1
sudo swift-ring-builder object.builder add z1-127.0.0.1:6010/device 1
sudo swift-ring-builder object.builder add z2-127.0.0.1:6020/device 1
sudo swift-ring-builder object.builder add z3-127.0.0.1:6030/device 1
sudo swift-ring-builder object.builder add z4-127.0.0.1:6040/device 1
sudo swift-ring-builder object.builder rebalance
sudo swift-ring-builder container.builder add z1-127.0.0.1:6011/device 1
sudo swift-ring-builder container.builder add z2-127.0.0.1:6021/device 1
sudo swift-ring-builder container.builder add z3-127.0.0.1:6031/device 1
sudo swift-ring-builder container.builder add z4-127.0.0.1:6041/device 1
sudo swift-ring-builder container.builder rebalance
sudo swift-ring-builder account.builder add z1-127.0.0.1:6012/device 1
sudo swift-ring-builder account.builder add z2-127.0.0.1:6022/device 1
sudo swift-ring-builder account.builder add z3-127.0.0.1:6032/device 1
sudo swift-ring-builder account.builder add z4-127.0.0.1:6042/device 1
sudo swift-ring-builder account.builder rebalance
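After each rebalance the corresponding .ring.gz file is written to /etc/swift. You can inspect a ring at any time, for example:
sudo swift-ring-builder object.builder
which prints the partition count, replica count and the four devices added above.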
To start swift and the REST API, run the following commands.
sudo swift-init main start
sudo swift-init rest start
Testing Swift
sudo chown -R swift.swift /etc/swift
Then run the following command and verify if you get the appropriate account information. The number of containers and objects stored within are displayed as well.
root@server1:~# swift -v -V 2.0 -A http://127.0.0.1:5000/v2.0/ -U service:swift -K swift stat
StorageURL: http://192.168.1.200:8080/v1/AUTH_4b0de95572044eb49345930225d81752
Auth Token: e6955ec2e6ca4059aba6bafc6c0d6473
Account: AUTH_4b0de95572044eb49345930225d81752
Containers: 0
Objects: 0
Bytes: 0
Accept-Ranges: bytes
X-Trans-Id: tx051c25a362534266a4583f49fa44558d
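As a further check you can upload a small file and list it back; the container name here is just an example:
swift -V 2.0 -A http://127.0.0.1:5000/v2.0/ -U service:swift -K swift upload mycontainer /etc/swift/swift.conf
swift -V 2.0 -A http://127.0.0.1:5000/v2.0/ -U service:swift -K swift list mycontainer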
At this point the OpenStack installation is complete. The scripts mentioned above can be downloaded from the attachments. This walkthrough mainly follows the official example, with a few small differences from the official site.
Open http://192.168.1.200 and log in with username admin and password admin; from the dashboard you can create images, launch instances, and perform other operations.