Control node: 192.168.198.101
Compute node: 192.168.198.102
Proxy node: 192.168.198.104
Storage node: 192.168.198.103
Run on all machines:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y ntp
sudo sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf
sudo service ntp restart
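The sed one-liner above is easy to get wrong; a minimal sketch dry-runs the same substitution on a one-line sample file under /tmp (a throwaway path chosen here for illustration) so the result can be inspected before touching the real /etc/ntp.conf:

```shell
# Dry run: apply the same substitution to a sample instead of the live file.
printf 'server ntp.ubuntu.com\n' > /tmp/ntp.conf.test   # sample of the stock line
sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /tmp/ntp.conf.test
cat /tmp/ntp.conf.test   # now three lines: upstream server, local clock, fudge
```

If the three lines look right, run the same sed against /etc/ntp.conf as above.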
Control node
apt-get install tgt open-iscsi open-iscsi-utils
fdisk /dev/sda
Carve out an empty partition; here it is /dev/sda5.
partprobe
pvcreate /dev/sda5
vgcreate nova-volumes /dev/sda5
Keystone
sudo apt-get install keystone
sudo su -
rm /var/lib/keystone/keystone.db
apt-get install python-mysqldb mysql-server (password:mysql)
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
service mysql restart
mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystonepassword';
mysql> quit
vim /etc/keystone/keystone.conf
Change connection = sqlite:////var/lib/keystone/keystone.db to
connection = mysql://keystone:[YOUR_KEYSTONE_PASSWORD]@192.168.198.101/keystone
admin_token = admin (defaults to ADMIN; change it to whatever you like)
sudo service keystone restart
keystone-manage db_sync
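The sqlite-to-MySQL connection swap can also be done non-interactively. A sketch, run against a sample file under /tmp (an illustrative path) and using the keystonepassword value from the GRANT above — substitute your own:

```shell
# Sample of the stock sqlite line shipped in keystone.conf:
printf 'connection = sqlite:////var/lib/keystone/keystone.db\n' > /tmp/keystone.conf.test
# Rewrite it to point at the MySQL database created above:
sed -i 's|^connection = sqlite:.*|connection = mysql://keystone:[email protected]/keystone|' /tmp/keystone.conf.test
grep '^connection' /tmp/keystone.conf.test
```

Run the same sed against /etc/keystone/keystone.conf once the output looks right.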
Configure Keystone
Create a tenant (admin):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 tenant-create --name admin --description "admin" --enabled true
Create a user (admin):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-create --tenant_id [admin_tenant_ID] --name admin --pass admin --enabled true
Create two roles (admin, MemberRole):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 role-create --name admin
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 role-create --name MemberRole
Bind the admin role to the user just created:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-role-add --user [admin_user_ID] --tenant_id [admin_tenant_ID] --role [admin_role_ID]
Create a tenant (service):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 tenant-create --name service --description "service" --enabled true
Create a user (nova):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-create --tenant_id [service_ID] --name nova --pass nova --enabled true
Bind the admin role to it:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-role-add --user [nova_ID] --tenant_id [service_ID] --role [admin_role_ID]
Create a user (glance):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-create --tenant_id [service_ID] --name glance --pass glance --enabled true
Bind the admin role to it:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-role-add --user [glance_ID] --tenant_id [service_ID] --role [admin_role_ID]
Create a user (swift):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-create --tenant_id [service_ID] --name swift --pass swift --enabled true
Bind the admin role to it:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-role-add --user [swift_ID] --tenant_id [service_ID] --role [admin_role_ID]
Create a user (ec2):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-create --tenant_id [service_ID] --name ec2 --pass ec2 --enabled true
Bind the admin role to it:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 user-role-add --user [ec2_ID] --tenant_id [service_ID] --role [admin_role_ID]
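Every [..._ID] placeholder above has to be filled in by hand from the client's table output. A hypothetical helper (get_id is my own name for it, and the pipe-separated table layout is the Essex client's default) can pull an ID out by name:

```shell
# get_id NAME: read `keystone ... tenant-list`-style table rows on stdin and
# print the id column of the row whose name column matches NAME.
get_id () {
    awk -v name="$1" -F'|' '$3 ~ name { gsub(/ /, "", $2); print $2 }'
}

# Usage with a canned row instead of a live keystone call:
printf '| 1b2c3d4e | service | True |\n' | get_id service   # prints 1b2c3d4e
```

With this, something like SERVICE_TENANT=$(keystone --token admin --endpoint http://192.168.198.101:35357/v2.0 tenant-list | get_id service) avoids copy-pasting UUIDs.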
Enabling Keystone
To make Swift compatible with the S3 API, define a new filter in keystone.conf and enable it.
Define the filter:
[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory
Enable it by updating the admin_api pipeline line:
[pipeline:admin_api]
pipeline = token_auth admin_token_auth xml_body json_body debug ec2_extension crud_extension admin_service
to:
[pipeline:admin_api]
pipeline = token_auth admin_token_auth xml_body json_body debug ec2_extension s3_extension crud_extension admin_service
Define the services
Services can be defined either with a template file or in the backend database.
To use the backend database, keystone.conf must contain these two lines:
[catalog]
driver = keystone.catalog.backends.sql.Catalog
Create the keystone service (type identity):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ service-create --name=keystone --type=identity --description="Keystone Identity Service"
Create its endpoint:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ endpoint-create --region RegionOne --service_id=[keystone_id] --publicurl=http://192.168.198.101:5000/v2.0 --internalurl=http://192.168.198.101:5000/v2.0 --adminurl=http://192.168.198.101:35357/v2.0
Create the nova service (type compute):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ service-create --name=nova --type=compute --description="Nova Compute Service"
Create its endpoint:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ endpoint-create --region RegionOne --service_id=[nova_ID] --publicurl='http://192.168.198.101:8774/v2/%(tenant_id)s' --internalurl='http://192.168.198.101:8774/v2/%(tenant_id)s' --adminurl='http://192.168.198.101:8774/v2/%(tenant_id)s'
Create the volume service (type volume):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ service-create --name=volume --type=volume --description="Nova Volume Service"
Create its endpoint:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ endpoint-create --region RegionOne --service_id=[volume_ID] --publicurl='http://192.168.198.101:8776/v1/%(tenant_id)s' --internalurl='http://192.168.198.101:8776/v1/%(tenant_id)s' --adminurl='http://192.168.198.101:8776/v1/%(tenant_id)s'
Create the glance service (type image):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ service-create --name=glance --type=image --description="Glance Image Service"
Create its endpoint:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ endpoint-create --region RegionOne --service_id=[glance_ID] --publicurl=http://192.168.198.101:9292/v1 --internalurl=http://192.168.198.101:9292/v1 --adminurl=http://192.168.198.101:9292/v1
Create the ec2 service (type ec2):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ service-create --name=ec2 --type=ec2 --description="EC2 Compatibility layer"
Create its endpoint:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ endpoint-create --region RegionOne --service_id=[ec2_ID] --publicurl=http://192.168.198.101:8773/services/Cloud --internalurl=http://192.168.198.101:8773/services/Cloud --adminurl=http://192.168.198.101:8773/services/Admin
Create the swift service (type object-store):
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ service-create --name=swift --type=object-store --description="Object Storage Service"
Create its endpoint:
keystone --token admin --endpoint http://192.168.198.101:35357/v2.0/ endpoint-create --region RegionOne --service_id=[swift_ID] --publicurl='https://192.168.198.104:8080/v1/AUTH_%(tenant_id)s' --adminurl='https://192.168.198.104:8080/' --internalurl='https://192.168.198.104:8080/v1/AUTH_%(tenant_id)s'
Verify the Keystone installation
export ADMIN_TOKEN=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
Then inspect the results with keystone user-list, keystone role-list and keystone tenant-list.
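Rather than exporting these by hand in every shell, a small sketch writes them to an rc file that can be sourced later (the /tmp/openrc path and name are illustrative — a dotfile in root's home works just as well):

```shell
# Persist the credentials, then source them into the current shell.
cat > /tmp/openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
EOF
. /tmp/openrc                # load the credentials into this shell
echo "$OS_AUTH_URL"          # prints http://127.0.0.1:5000/v2.0/
```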
Glance
apt-get install glance
rm /var/lib/glance/glance.sqlite
mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glancepassword';
mysql> quit
Update /etc/glance/glance-api-paste.ini:
[filter:authtoken]
admin_tenant_name = service
admin_user = glance
admin_password = glance
Add the following two lines to /etc/glance/glance-api.conf:
[paste_deploy]
flavor = keystone
Add the following two lines to /etc/glance/glance-registry.conf:
[paste_deploy]
flavor = keystone
Update /etc/glance/glance-registry-paste.ini:
[filter:authtoken]
admin_tenant_name = service
admin_user = glance
admin_password = glance
Update the pipeline line in glance-registry-paste.ini to:
[pipeline:glance-registry]
#pipeline = context registryapp
# NOTE: use the following pipeline for keystone
pipeline = authtoken auth-context context registryapp
Point /etc/glance/glance-registry.conf at MySQL (the password must match the GRANT above):
sql_connection = mysql://glance:[email protected]/glance
glance-manage version_control 0
glance-manage db_sync
service glance-registry restart
service glance-api restart
Test Glance
glance index (no output at this stage is normal)
Download the images:
cd ~
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Upload the images:
glance add name=cirros-0.3.0-x86_64 is_public=true container_format=bare disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img
glance add name="Ubuntu 12.04 cloudimg amd64" is_public=true container_format=ovf disk_format=qcow2 < precise-server-cloudimg-amd64-disk1.img
glance index
Prepare the network configuration
FlatDHCP networking is used on a single NIC.
/etc/network/interfaces
eth0: public IP and gateway
br100: traffic between nodes
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
# Bridge network interface for VM networks
auto br100
iface br100 inet static
address 10.0.0.1
netmask 255.255.255.0
bridge_stp off
bridge_fd 0
Install bridge-utils:
sudo apt-get install bridge-utils
Make sure the bridge is set up. If flat_network_bridge=br100 has been added to nova.conf, the bridge is brought up automatically when nova-manage network runs:
sudo brctl addbr br100
Nova
mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novapassword';
mysql> quit
apt-get install rabbitmq-server
apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy nova-cert nova-consoleauth nova-doc nova-scheduler nova-network kvm nova-objectstore nova-compute-kvm
Edit /etc/nova/api-paste.ini and modify the last three lines:
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
admin_tenant_name = service
admin_user = nova
admin_password = nova
Edit /etc/nova/nova.conf:
[DEFAULT]
###### LOGS/STATE
#verbose=True
verbose=False
###### AUTHENTICATION
auth_strategy=keystone
###### SCHEDULER
#--compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_driver=nova.scheduler.simple.SimpleScheduler
###### VOLUMES
volume_group=nova-volumes
volume_name_template=volume-%08x
iscsi_helper=tgtadm
###### DATABASE
sql_connection=mysql://nova:[email protected]/nova
###### COMPUTE
libvirt_type=kvm
#libvirt_type=qemu
connection_type=libvirt
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
allow_resize_to_same_host=True
libvirt_use_virtio_for_bridges=true
start_guests_on_host_boot=true
resume_guests_state_on_host_boot=true
###### APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
allow_admin_api=true
s3_host=192.168.198.101
cc_host=192.168.198.101
###### RABBITMQ
rabbit_host=192.168.198.101
###### GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.198.101:9292
###### NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
public_interface=eth0
flat_interface=eth0
flat_network_bridge=br100
fixed_range=10.0.0.0/24
multi_host=true
###### NOVNC CONSOLE
novnc_enabled=true
novncproxy_base_url=http://192.168.198.101:6080/vnc_auto.html
vncserver_proxyclient_address=192.168.198.101
vncserver_listen=192.168.198.101
########Nova
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
#####MISC
use_deprecated_auth=false
root_helper=sudo nova-rootwrap
Set the directory permissions:
chown -R nova:nova /etc/nova
Create a script to restart the nova services:
vim /restart.sh
#!/bin/bash
for a in rabbitmq-server libvirt-bin nova-network nova-cert nova-compute \
nova-api nova-objectstore nova-scheduler nova-volume \
novnc nova-consoleauth; do service "$a" stop; done
for a in rabbitmq-server libvirt-bin nova-network nova-cert nova-compute \
nova-api nova-objectstore nova-scheduler nova-volume \
novnc nova-consoleauth; do service "$a" start; done
bash /restart.sh
Sync the database:
nova-manage db sync
Create fixed IPs
A fixed IP is the actual address handed to a virtual machine; these records are all written to the database.
nova-manage network create private --fixed_range_v4=10.0.0.0/24 --num_networks=1 --bridge=br100 --bridge_interface=eth0 --network_size=256 --multi_host=T
Create floating IPs
A floating IP, in Amazon EC2's sense, is simply a public IP. It is mapped onto the instance, firewall-style; the mapping is actually implemented with iptables.
nova-manage floating create --ip_range=192.168.198.32/27
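Why a /27: it holds 2^(32-27) = 32 addresses, so 192.168.198.32/27 spans .32 through .63. Shell arithmetic confirms the pool size:

```shell
PREFIX=27
POOL=$(( 1 << (32 - PREFIX) ))   # 2^(32-prefix) addresses in the block
echo "$POOL"                     # prints 32
```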
Dashboard
apt-get install -y memcached libapache2-mod-wsgi openstack-dashboard
vim /etc/openstack-dashboard/local_settings.py
CACHE_BACKEND = 'memcached://127.0.0.1:11211/'
mysql -u root -p
mysql> CREATE DATABASE dash;
mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'dashpassword';
mysql> quit
Then configure local_settings.py and build the database with the manage.py syncdb command:
vim /etc/openstack-dashboard/local_settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'dash',
'USER': 'dash',
'PASSWORD': 'dashpassword',
'HOST': '192.168.198.101',
'default-character-set': 'utf8'
},
}
The settings above connect the dashboard to the MySQL database.
$ /usr/share/openstack-dashboard/manage.py syncdb
If you do not want to see Apache warnings, create the following directory for the dashboard:
sudo mkdir -p /var/lib/dash/.blackhole
Restart the services:
/etc/init.d/apache2 restart
sudo restart nova-api
Compute node
sudo apt-get install nova-api nova-network nova-compute nova-common nova-compute-kvm python-nova python-novaclient python-keystone python-keystoneclient mysql-client
vim /etc/nova/api-paste.ini
admin_tenant_name = service
admin_user = nova
admin_password = nova
vim /etc/nova/nova.conf
[DEFAULT]
##### LOGS/STATE
#verbose=True
verbose=False
###### AUTHENTICATION
auth_strategy=keystone
###### SCHEDULER
#--compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_driver=nova.scheduler.simple.SimpleScheduler
###### VOLUMES
volume_group=nova-volumes
volume_name_template=volume-%08x
iscsi_helper=tgtadm
###### DATABASE
sql_connection=mysql://nova:[email protected]/nova
###### COMPUTE
libvirt_type=kvm
#libvirt_type=qemu
connection_type=libvirt
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
allow_resize_to_same_host=True
libvirt_use_virtio_for_bridges=true
start_guests_on_host_boot=true
resume_guests_state_on_host_boot=true
###### APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
allow_admin_api=true
s3_host=192.168.198.101
cc_host=192.168.198.101
###### RABBITMQ
rabbit_host=192.168.198.101
###### GLANCE
image_service=nova.image.glance.GlanceImageService
glance_api_servers=192.168.198.101:9292
###### NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
public_interface=eth0
flat_interface=eth0
flat_network_bridge=br100
fixed_range=10.0.0.0/24
multi_host=true
###### NOVNC CONSOLE
novnc_enabled=true
novncproxy_base_url=http://192.168.198.101:6080/vnc_auto.html
vncserver_proxyclient_address=192.168.198.102
vncserver_listen=192.168.198.102
########Nova
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
#####MISC
use_deprecated_auth=false
root_helper=sudo nova-rootwrap
chown -R nova:nova /etc/nova
vim /restart.sh
#!/bin/bash
for a in libvirt-bin nova-network nova-compute \
nova-api ; do service "$a" stop; done
for a in libvirt-bin nova-network nova-compute \
nova-api ; do service "$a" start; done
bash /restart.sh
Proxy node
# apt-get install swift openssh-server rsync memcached python-netifaces python-xattr python-memcache
mkdir -p /etc/swift
chown -R swift:swift /etc/swift/
Create /etc/swift/swift.conf:
[swift-hash]
# random unique string that can never change (DO NOT LOSE)
swift_hash_path_suffix = ABCabcABC
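The ABCabcABC placeholder should really be a random, hard-to-guess value. A sketch that generates one and writes a sample file under /tmp for inspection (the path is illustrative; note the suffix must be identical on every proxy and storage node, and must never change once data is stored):

```shell
# 16 random hex characters from /dev/urandom:
SUFFIX=$(od -An -tx8 -N8 /dev/urandom | tr -d ' ')
cat > /tmp/swift.conf.sample <<EOF
[swift-hash]
swift_hash_path_suffix = $SUFFIX
EOF
grep swift_hash_path_suffix /tmp/swift.conf.sample
```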
apt-get install swift-proxy memcached
Create a self-signed SSL certificate:
cd /etc/swift
openssl req -new -x509 -nodes -out cert.crt -keyout cert.key
Change memcached's default listen address, preferably to the node's local IP rather than a public one; in /etc/memcached.conf change the line:
-l 127.0.0.1
to
-l <PROXY_LOCAL_NET_IP>
Restart the memcached service:
service memcached restart
Create /etc/swift/proxy-server.conf:
[DEFAULT]
bind_port = 8080
user = swift
[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server
[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true
[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true
service_port = 5000
service_host = 192.168.198.101
auth_port = 35357
auth_host = 192.168.198.101
auth_token = admin
admin_token = admin
[filter:cache]
use = egg:swift#memcache
set log_name = cache
[filter:catch_errors]
use = egg:swift#catch_errors
[filter:healthcheck]
use = egg:swift#healthcheck
If you run multiple memcached servers, list each IP:port under [filter:cache] in proxy-server.conf.
cd /etc/swift
swift-ring-builder account.builder create 18 1 1
swift-ring-builder container.builder create 18 1 1
swift-ring-builder object.builder create 18 1 1
For every storage device, add an entry to each ring:
swift-ring-builder account.builder add z1-192.168.198.103:6002/sda5 100
swift-ring-builder container.builder add z1-192.168.198.103:6001/sda5 100
swift-ring-builder object.builder add z1-192.168.198.103:6000/sda5 100
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
Copy the account.ring.gz, container.ring.gz and object.ring.gz files to /etc/swift on every proxy and storage node.
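A sketch of that copy step for this guide's topology (the node list and root login are assumptions; the `echo` makes it a dry run — drop it to copy for real):

```shell
NODES="192.168.198.103 192.168.198.104"   # storage node and proxy node
for node in $NODES; do
    echo scp /etc/swift/*.ring.gz root@"$node":/etc/swift/   # drop `echo` to copy
done
```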
Make sure the swift user owns all the configuration files:
chown -R swift:swift /etc/swift
Start the proxy service:
swift-init proxy start
Restart the storage node services:
swift-init main start
swift-init rest start
Storage node
# apt-get install swift openssh-server rsync memcached python-netifaces python-xattr python-memcache
mkdir -p /etc/swift
chown -R swift:swift /etc/swift/
Create /etc/swift/swift.conf:
[swift-hash]
# random unique string that can never change (DO NOT LOSE)
swift_hash_path_suffix = ABCabcABC
apt-get install swift-account swift-container swift-object xfsprogs
Set up an XFS volume on every device on the node (here the device is /dev/sda5):
fdisk /dev/sda (set up a single partition)
mkfs.xfs -i size=1024 /dev/sda5
echo "/dev/sda5 /srv/node/sda5 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sda5
mount /srv/node/sda5
chown -R swift:swift /srv/node
Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.198.103
[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Edit the following line in /etc/default/rsync:
RSYNC_ENABLE = true
Start rsync:
service rsync start
Create /etc/swift/account-server.conf:
[DEFAULT]
bind_ip = 192.168.198.103
workers = 2
[pipeline:main]
pipeline = account-server
[app:account-server]
use = egg:swift#account
[account-replicator]
[account-auditor]
[account-reaper]
Create /etc/swift/container-server.conf:
[DEFAULT]
bind_ip = 192.168.198.103
workers = 2
[pipeline:main]
pipeline = container-server
[app:container-server]
use = egg:swift#container
[container-replicator]
[container-updater]
[container-auditor]
[container-sync]
Create /etc/swift/object-server.conf:
[DEFAULT]
bind_ip = 192.168.198.103
workers = 2
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
[object-updater]
[object-auditor]
[object-expirer]
Start the storage services:
swift-init object-server start
swift-init object-replicator start
swift-init object-updater start
swift-init object-auditor start
swift-init container-server start
swift-init container-replicator start
swift-init container-updater start
swift-init container-auditor start
swift-init account-server start
swift-init account-replicator start
swift-init account-auditor start
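The eleven swift-init invocations above can be collapsed into one loop (a sketch; the `echo` makes it a dry run that just prints each command — drop it to actually start the services):

```shell
SERVICES="object-server object-replicator object-updater object-auditor \
container-server container-replicator container-updater container-auditor \
account-server account-replicator account-auditor"
for s in $SERVICES; do
    echo swift-init "$s" start   # drop `echo` to start each service
done
```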
Windows 2008
Here the Windows 2008 image is built in qcow2 format; the process is similar to CentOS.
Create a 10 GB image file with kvm-img:
kvm-img create -f qcow2 win2008.img 10G
Windows ships no virtio drivers by default, so download them first:
wget http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-0.1-15.iso
wget http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/virtio-win-1.1.16.vfd
Start kvm, mapping the driver vfd to floppy A:
kvm -m 1024 -cdrom en_windows_server_2008_r2_dvd.iso -drive file=win2008.img,if=virtio,boot=on -fda virtio-win-1.1.16.vfd -boot d -nographic -vnc :1
Install over VNC; during installation you have to select the disk driver. Once installed, stop the virtual machine and boot it again with:
kvm -m 1024 -drive file=win2008.img,if=virtio,boot=on -cdrom virtio-win-0.1-15.iso -net nic,model=virtio -net user -boot c -nographic -vnc :1
Connect over VNC again; it should report that the virtio NIC driver was installed automatically.
Add the image with glance, specifying the qcow2 format:
glance add -A your_glance_token name="win2008" is_public=true disk_format=qcow2 < win2008.img
That completes the image.
Afterwards, you can also open ports 3389 and 22 in the security group, allowing RDP and SSH access to the corresponding instances.
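With the Essex nova client, opening those ports in the default security group looks like this sketch (credentials exported as in the Keystone verification section; the `echo` makes it a dry run — remove it to apply the rules):

```shell
# Allow SSH (22) and RDP (3389) from anywhere in the default security group.
for rule in "tcp 22 22" "tcp 3389 3389"; do
    echo nova secgroup-add-rule default $rule 0.0.0.0/0   # drop `echo` to apply
done
```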