OpenStack is a collection of open source tools (projects) that builds and manages private and public clouds from pooled virtual resources. Six of its projects handle the core cloud-computing services: compute, networking, storage, identity, and image services. A dozen or so optional projects can be bundled with them to create unique, deployable cloud architectures.
1. IaaS, Infrastructure as a Service (the one I am most used to): users obtain virtual machines, storage, and networks over the network, then operate those resources according to their own needs.
2. PaaS, Platform as a Service: a software development platform offered as a service, e.g. an Eclipse/Java programming platform; the provider supplies the programming interfaces and the runtime platform.
3. SaaS, Software as a Service: software delivered to users as a service over the network, e.g. web mail, HR systems, order management, CRM. Users do not buy the software; they rent web-based software from the provider to run their business.
The OpenStack architecture is made up of a large number of open source projects. Six stable, reliable core services handle compute, networking, storage, identity, and images; more than a dozen optional services of varying maturity are available on top of them. The six core services form the system's base architecture, while the remaining projects handle the dashboard, orchestration, bare-metal provisioning, messaging, containers, and governance.
The components call and talk to each other through the message queue and the database. Each project has its own focus, and the all-inclusive architecture does not suit every user: Glance, for example, saw no real use in the early A and B releases, and Nova can run independently of the image service. Only when a cloud grows large enough to manage many images does a component like Glance become necessary.
OpenStack logical architecture

The boot-instance workflow runs through the components as follows:
1. Through the dashboard or the CLI, the user sends an authentication request to keystone over the RESTful API.
2. keystone validates the request, generates an auth-token, and returns it to the requester.
3. Carrying the auth-token, the client sends a boot-instance request to nova-api over the RESTful API.
4. After accepting the request, nova-api asks keystone to verify that the user and token are valid.
5. nova-api calls rabbitmq to ask nova-scheduler whether resources (a node host) are available to create the virtual machine.
6. nova-scheduler listens on the message queue and picks up the nova-api request.
7. nova-scheduler queries the compute resource state in the nova database and runs the scheduling algorithm to pick a host that satisfies the request.
8. nova-scheduler sends the create-instance message to nova-compute over RPC.
9. nova-compute picks the create-instance message up from its message queue.
10. nova-compute asks nova-conductor over RPC for the instance details (flavor).
11. nova-conductor takes the nova-compute request from the message queue, looks up the instance details in the database, and publishes them back onto the queue; nova-compute picks them up.
12. nova-compute requests the image from glance-api; glance-api validates the token with keystone, and nova-compute obtains the image information (URL).
13. nova-compute requests the network information it needs from neutron-server; neutron-server validates the token with keystone, and nova-compute obtains the network information.
14. nova-compute requests the persistent storage information from cinder-api; cinder-api validates the token with keystone, and nova-compute obtains the storage information.
15. nova-compute calls the configured virtualization driver with the instance information to create the virtual machine.

Deployment outline:
1. Environment setup
2. Configure the keystone service
3. Configure the glance service
4. Configure the placement service
5. Configure the nova service (control node)
6. Configure the nova service (compute node)
7. Configure the neutron service (control node)
8. Configure the neutron service (compute node)
9. Create an instance
10. Configure the dashboard service
In the diagram, a bare number such as 10 stands for the IP 192.168.99.10.
Systems:
compute node: CentOS 7.2.1511
all others: CentOS 7.6.1810
Prepare the yum repo: /etc/yum.repos.d/openstack.repo
yum install centos-release-openstack-stein
yum install python-openstackclient openstack-selinux
hostnamectl set-hostname HOSTNAME
cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
ntpdate time3.aliyun.com && hwclock -w
echo "192.168.99.100 openstackvip.com" >> /etc/hosts
Set your VIP and its domain name here.
cd /etc/sysconfig/network-scripts/
vim ifcfg-bond0
BOOTPROTO=static
NAME=bond0
DEVICE=bond0
ONBOOT=yes
BONDING_MASTER=yes
BONDING_OPTS="mode=1 miimon=100" # bonding mode 1 (active-backup); link state check interval in ms
IPADDR=192.168.99.101
NETMASK=255.255.255.0
GATEWAY=192.168.99.2
DNS1=202.106.0.20
eth0 configuration:
vim ifcfg-eth0
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
eth1 configuration:
vim ifcfg-eth1
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
MASTER=bond0
USERCTL=no
SLAVE=yes
- Configure on the database node
yum -y install mariadb mariadb-server
vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.99.106
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
systemctl enable mariadb.service
systemctl restart mariadb.service
mysql_secure_installation
- Configure on the database node
yum -y install memcached python-memcached
vim /etc/sysconfig/memcached
Replace the following line:
OPTIONS="-l 192.168.99.106"
systemctl enable memcached.service
systemctl restart memcached.service
- Configure on the database node
yum -y install rabbitmq-server
systemctl enable rabbitmq-server
systemctl restart rabbitmq-server
rabbitmqctl add_user openstack 123
Grant permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmq-plugins enable rabbitmq_management
rabbitmq-plugins list
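As a quick sanity check (a sketch; it assumes the management plugin is reachable on its default port), you can query the management HTTP API with the account just created:
# if this returns "not_authorised", first tag the user: rabbitmqctl set_user_tags openstack administrator
curl -s -u openstack:123 http://192.168.99.106:15672/api/whoami
# the response should name the user, e.g. {"name":"openstack",...}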
- Configure on the haproxy node
yum -y install keepalived haproxy
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id ha_1
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_iptables
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.99.100 dev eth0 label eth0:1
}
}
systemctl restart keepalived
systemctl enable keepalived
vim /etc/haproxy/haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 180s
timeout queue 10m
timeout connect 180s
timeout client 10m
timeout server 10m
timeout http-keep-alive 180s
timeout check 10s
maxconn 3000
listen stats
mode http
bind :9999
stats enable
log global
stats uri /haproxy-status
stats auth admin:123
listen dashboard
bind :80
mode http
balance source
server dashboard 192.168.99.106:80 check inter 2000 fall 3 rise 5
listen mysql
bind :3306
mode tcp
balance source
server mysql 192.168.99.106:3306 check inter 2000 fall 3 rise 5
listen memcached
bind :11211
mode tcp
balance source
server memcached 192.168.99.106:11211 inter 2000 fall 3 rise 5
listen rabbit
bind :5672
mode tcp
server rabbit 192.168.99.106:5672 inter 2000 fall 3 rise 5
listen rabbit_web
bind :15672
mode http
server rabbit_web 192.168.99.106:15672 inter 2000 fall 3 rise 5
listen keystone
bind :5000
mode tcp
server keystone 192.168.99.101:5000 inter 2000 fall 3 rise 5
listen glance
bind :9292
mode tcp
server glance 192.168.99.101:9292 inter 2000 fall 3 rise 5
listen placement
bind :8778
mode tcp
server placement 192.168.99.101:8778 inter 2000 fall 3 rise 5
listen neutron
bind :9696
mode tcp
server neutron 192.168.99.101:9696 inter 2000 fall 3 rise 5
listen nova
bind :8774
mode tcp
server nova 192.168.99.101:8774 inter 2000 fall 3 rise 5
listen VNC
bind :6080
mode tcp
server VNC 192.168.99.101:6080 inter 2000 fall 3 rise 5
systemctl restart haproxy
systemctl enable haproxy
Check the listening ports:
ss -tnl
# output
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:5000 *:*
LISTEN 0 128 *:5672 *:*
LISTEN 0 128 *:8778 *:*
LISTEN 0 128 *:3306 *:*
LISTEN 0 128 *:11211 *:*
LISTEN 0 128 *:9292 *:*
LISTEN 0 128 *:9999 *:*
LISTEN 0 128 *:80 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 *:15672 *:*
LISTEN 0 100 127.0.0.1:25 *:*
LISTEN 0 128 *:6080 *:*
LISTEN 0 128 *:9696 *:*
LISTEN 0 128 *:8774 *:*
LISTEN 0 128 :::22 :::*
LISTEN 0 100 ::1:25 :::*
echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf
Allows haproxy to start and bind even when the VIP is not present on this node.
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
Enables IP forwarding.
sysctl -p
Applies the settings.
Keystone database configuration
[mysql]$ mysql -uroot -p123
MariaDB [(none)]> create database keystone;
MariaDB [(none)]> grant all on keystone.* to keystone@'%' identified by '123';
yum -y install python2-PyMySQL mariadb
Test from the controller:
mysql -ukeystone -h 192.168.99.106 -p123
Add the hosts entry on the controller: /etc/hosts
192.168.99.100 openstackvip.com
Configure keystone
yum -y install openstack-keystone httpd mod_wsgi python-memcached
openssl rand -hex 10
Output; note it down, you will need it:
db148a2487000ad12b90
/etc/keystone/keystone.conf
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/keystone/keystone.conf
vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = db148a2487000ad12b90
[access_rules_config]
[application_credential]
[assignment]
[auth]
[cache]
[catalog]
[cors]
[credential]
[database]
connection = mysql+pymysql://keystone:[email protected]/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_receipts]
[fernet_tokens]
[healthcheck]
[identity]
[identity_mapping]
[jwt_tokens]
[ldap]
[memcache]
[oauth1]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[policy]
[profiler]
[receipt]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
[tokenless_auth]
[trust]
[unified_limit]
[wsgi]
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Verify:
ls /etc/keystone/fernet-keys/
1 0
/etc/httpd/conf/httpd.conf
ServerName controller:80
sed -i '1s#$#\nServerName controller:80#' /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl restart httpd.service
export OS_TOKEN=db148a2487000ad12b90
export OS_URL=http://openstackvip.com:5000/v3
export OS_IDENTITY_API_VERSION=3
Verify:
openstack domain list
The request you have made requires authentication. (HTTP 401) (Request-ID: req-03ea8186-0af9-4fa8-ba53-d043cd28e2c0)
If you hit this error, check your token: the OS_TOKEN variable you exported must match the admin_token set in /etc/keystone/keystone.conf. Make them identical and it will work.
openstack domain list
Empty output is correct here, because no domains have been added yet.
openstack domain create --description "exdomain" default
openstack project create --domain default \
--description "Admin Project" admin
openstack user create --domain default --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Demo project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
openstack project create --domain default --description "service project" service
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name keystone --description "openstack identify" identity
openstack service list
openstack endpoint create --region RegionOne identity public http://openstackvip.com:5000/v3
openstack endpoint create --region RegionOne identity internal http://openstackvip.com:5000/v3
Admin endpoint:
openstack endpoint create --region RegionOne identity admin http://openstackvip.com:5000/v3
unset OS_TOKEN
openstack --os-auth-url http://openstackvip.com:5000/v3 \
--os-project-domain-name default \
--os-user-domain-name default \
--os-project-name admin \
--os-username admin token issue
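The same authentication flow can also be exercised against the raw REST API. A minimal curl sketch (the JSON mirrors the admin credentials created above; the issued token comes back in the X-Subject-Token response header):
curl -s -D - -o /dev/null -H "Content-Type: application/json" \
  -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"admin","domain":{"name":"default"},"password":"123"}}},"scope":{"project":{"name":"admin","domain":{"name":"default"}}}}}' \
  http://openstackvip.com:5000/v3/auth/tokens | grep X-Subject-Token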
keystone_admin.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123
export OS_AUTH_URL=http://openstackvip.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Demo user script keystone_demo.sh
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123
export OS_AUTH_URL=http://openstackvip.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
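Usage is the same for both scripts: source one, and every subsequent openstack command authenticates with those credentials, e.g.:
source keystone_demo.sh
openstack token issue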
Glance is the OpenStack image service component. It listens on port 9292, accepts REST API requests, and works with the other modules to retrieve, upload, and delete images.
When a virtual machine is created, the image is first uploaded to glance:
glance-api handles image deletion, upload, and retrieval;
glance-registry (port 9191) talks to mysql and stores and fetches image metadata.
The glance database has two tables, image and image property, which hold the image format, size, and similar information.
image store is a storage interface layer through which glance accesses images.
yum -y install openstack-glance
mysql -uroot -p123
MariaDB [(none)]> create database glance;
MariaDB [(none)]> grant all on glance.* to 'glance'@'%' identified by '123';
Verify that the glance user can connect:
mysql -hopenstackvip.com -uglance -p123
/etc/glance/glance-api.conf
sed -i -e '/^#/d' -e '/^$/d' /etc/glance/glance-api.conf
vim /etc/glance/glance-api.conf
The final result:
[DEFAULT]
[cinder]
[cors]
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[file]
[glance.store.http.store]
[glance.store.rbd.store]
[glance.store.sheepdog.store]
[glance.store.swift.store]
[glance.store.vmware_datastore.store]
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
[image_format]
[keystone_authtoken]
auth_uri = http://openstackvip.com:5000
auth_url = http://openstackvip.com:5000
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
/etc/glance/glance-registry.conf
sed -i -e '/^#/d' -e '/^$/d' /etc/glance/glance-registry.conf
vim /etc/glance/glance-registry.conf
The final result:
[DEFAULT]
[database]
connection = mysql+pymysql://glance:[email protected]/glance
[keystone_authtoken]
auth_uri = http://openstackvip.com:5000
auth_url = http://openstackvip.com:5000
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
source keystone_admin.sh
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://openstackvip.com:9292
openstack endpoint create --region RegionOne image internal http://openstackvip.com:9292
openstack endpoint create --region RegionOne image admin http://openstackvip.com:9292
openstack endpoint list
glance image-list
# or
openstack image list
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create "cirros" \
--file /root/cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 \
--container-format bare \
--public
Verify the glance image:
glance image-list
# and
openstack image list
openstack image show cirros
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2019-08-22T06:20:18Z |
| disk_format | qcow2 |
| file | /v2/images/7ae353f8-db19-4449-b4ac-df1e70fe96f7/file |
| id | 7ae353f8-db19-4449-b4ac-df1e70fe96f7 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 7cbf02c5e55f43938062a9e31e9ea4bb |
| properties | os_hash_algo='sha512', os_hash_value='1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2019-08-22T06:20:19Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
IDENTIFIED BY '123';
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack service create --name placement \
--description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://openstackvip.com:8778
openstack endpoint create --region RegionOne placement internal http://openstackvip.com:8778
openstack endpoint create --region RegionOne placement admin http://openstackvip.com:8778
yum -y install openstack-placement-api
/etc/placement/placement.conf
sed -i -e '/^#/d' -e '/^$/d' /etc/placement/placement.conf
vim /etc/placement/placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://openstackvip.com:5000/v3
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = 123
[placement]
[placement_database]
connection = mysql+pymysql://placement:[email protected]/placement
su -s /bin/sh -c "placement-manage db sync" placement
systemctl restart httpd
placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
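The service can also be probed directly over HTTP; the root URL performs unauthenticated version discovery (a quick check, assuming port 8778 is reachable through the VIP as configured earlier):
curl -s http://openstackvip.com:8778 | python -m json.tool
# should print a "versions" document for the placement API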
Nova is split into control nodes and compute nodes. A compute node creates virtual machines through nova-compute, which calls KVM via libvirt; the nova services communicate with each other through the rabbitMQ queue.
Its components and functions:
API: receives and responds to external requests.
Scheduler: chooses the physical host a virtual machine will run on.
Conductor: middleware through which compute nodes access the database.
Consoleauth: authorization and authentication for the console.
Novncproxy: VNC proxy used to display the virtual machine's console.
Nova-API:
The nova-api component implements the RESTful API: it receives and responds to end-user compute API requests and passes them on to the other service components over the message queue. It is also EC2-API compatible, so EC2 management tools can be used for day-to-day nova administration.
nova-scheduler:
The nova-scheduler module decides which host (compute node) a virtual machine is created on. Scheduling a VM onto a physical node takes two steps:
Filtering (filter): filter out the hosts that are able to create the virtual machine.
Weighting (weight): rank the remaining hosts by weight; by default the ranking is based on free resources.
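For illustration, both steps are tunable in nova.conf on the control node. A sketch (the option names are from the Stein [filter_scheduler] section; the values here are assumptions, not something this deployment requires):
[filter_scheduler]
# a host must pass every enabled filter to remain a candidate
enabled_filters = AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
# a positive multiplier favors hosts with more free RAM
ram_weight_multiplier = 1.0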
Operate on the database server:
mysql -uroot -p123
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123';
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123';
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123';
MariaDB [(none)]> flush privileges;
yum -y install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler
openstack service create --name nova \
--description "OpenStack Compute" compute
openstack endpoint create --region RegionOne \
compute public http://openstackvip.com:8774/v2.1
openstack endpoint create --region RegionOne \
compute internal http://openstackvip.com:8774/v2.1
openstack endpoint create --region RegionOne \
compute admin http://openstackvip.com:8774/v2.1
/etc/nova/nova.conf
sed -i -e '/^#/d' -e '/^$/d' /etc/nova/nova.conf
vim /etc/nova/nova.conf
Full configuration:
[DEFAULT]
my_ip = 192.168.99.101
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
rpc_backend=rabbit
[api]
auth_strategy=keystone
[api_database]
connection = mysql+pymysql://nova:[email protected]/nova_api
[database]
connection = mysql+pymysql://nova:[email protected]/nova
[glance]
api_servers = http://openstackvip.com:9292
[keystone_authtoken]
auth_url = http://openstackvip.com:5000/v3
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://openstackvip.com:5000/v3
username = placement
password = 123
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
vim /etc/httpd/conf.d/00-placement-api.conf
Append the following at the very bottom:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
systemctl restart httpd
# sync the nova_api database
su -s /bin/sh -c "nova-manage api_db sync" nova
# map the nova cell0 database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# create the nova cell1 database
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# sync the nova database
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
Save these restart commands as a script, nova-restart.sh:
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
chmod a+x nova-restart.sh
nova service-list
My compute node IP: 192.168.99.10
- Compute node
yum -y install openstack-nova-compute
sed -i -e '/^#/d' -e '/^$/d' /etc/nova/nova.conf
vim /etc/nova/nova.conf
# full configuration:
[DEFAULT]
my_ip = 192.168.99.23
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:[email protected]
[api]
auth_strategy=keystone
[glance]
api_servers=http://openstackvip.com:9292
[keystone_authtoken]
auth_url = http://openstackvip.com:5000/v3
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name = RegionOne
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
auth_url = http://openstackvip.com:5000/v3
username = placement
password = 123
[vnc]
enabled=true
server_listen=0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url=http://openstackvip.com:6080/vnc_auto.html
egrep -c '(vmx|svm)' /proc/cpuinfo
40
If this command returns zero, your compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section of /etc/nova/nova.conf
as follows:
[libvirt]
# ...
virt_type = qemu
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
source keystone_admin.sh
openstack compute service list --service nova-compute
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Periodic automatic host discovery:
vim /etc/nova/nova.conf
add this line:
[scheduler]
discover_hosts_in_cells_interval=300
bash nova-restart.sh
Verification:
[controller]$ openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2019-08-23T03:24:19.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2019-08-23T03:24:19.000000 |
| 3 | nova-conductor | controller | internal | enabled | up | 2019-08-23T03:24:13.000000 |
| 6 | nova-compute | note1 | nova | enabled | up | 2019-08-23T03:24:19.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+
openstack catalog list
openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 7ae353f8-db19-4449-b4ac-df1e70fe96f7 | cirros | active |
+--------------------------------------+--------+--------+
nova-status upgrade check
- Create on the database server
To create the database, complete these steps:
mysql -u root -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY '123';
- On the controller
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron \
--description "OpenStack Networking" network
openstack endpoint create --region RegionOne \
network public http://openstackvip.com:9696
openstack endpoint create --region RegionOne \
network internal http://openstackvip.com:9696
openstack endpoint create --region RegionOne \
network admin http://openstackvip.com:9696
Configure the networking option
5. Install the components
yum -y install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
Configure the server component
6. Edit neutron.conf
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/neutron/neutron.conf
vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[cors]
[database]
connection = mysql+pymysql://neutron:[email protected]/neutron
[keystone_authtoken]
www_authenticate_uri = http://openstackvip.com:5000
auth_url = http://openstackvip.com:5000
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
[nova]
auth_url = http://openstackvip.com:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123
Note: the [nova] section is not present in the default file; you must add it manually at the end.
Configure the Modular Layer 2 (ML2) plug-in
7. Edit the ml2_conf.ini file
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
Configure the Linux bridge agent
8. Edit the linuxbridge_agent.ini file
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
9. Configure the /etc/sysctl.conf file:
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
Apply:
sysctl -p
This prints errors here; they can be ignored:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
Configure the DHCP agent
10. Edit the dhcp_agent.ini file
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent
11. Edit the metadata_agent.ini file
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = 192.168.99.101
metadata_proxy_shared_secret = 123
nova_metadata_host is the controller IP; it can also be set to the VIP and reverse-proxied back by HAProxy.
metadata_proxy_shared_secret is the password for the metadata proxy.
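Once an instance is running, the metadata path can be verified end to end from inside the guest (a sketch; 169.254.169.254 is the well-known metadata address this agent serves):
curl http://169.254.169.254/latest/meta-data/instance-id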
Configure the Compute service to use the Networking service
12. Edit the /etc/nova/nova.conf file
vim /etc/nova/nova.conf
Append at the end:
[neutron]
url = http://openstackvip.com:9696
auth_url = http://openstackvip.com:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123
service_metadata_proxy = true
metadata_proxy_shared_secret = 123
metadata_proxy_shared_secret is the password configured in step 11.
Create the symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
Note: if you chose Self-service networks, the layer-3 service must also be started; we chose Provider networks, so it is not needed:
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
- On the compute node
yum -y install openstack-neutron-linuxbridge ebtables ipset
Configure the common components
2. Edit the neutron.conf file
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/neutron/neutron.conf
vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
[cors]
[database]
[keystone_authtoken]
www_authenticate_uri = http://openstackvip.com:5000
auth_url = http://openstackvip.com:5000
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
Configure the Linux bridge agent
3. Edit the linuxbridge_agent.ini file
sed -i.bak -e '/^#/d' -e '/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Make sure your Linux kernel supports bridge filters.
4. Configure the /etc/sysctl.conf file:
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
Apply:
sysctl -p
Configure the Compute service to use the Networking service
5. Edit the nova.conf file
vim /etc/nova/nova.conf
[neutron]
url = http://openstackvip.com:9696
auth_url = http://openstackvip.com:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
- Control node
openstack extension list --network
openstack network agent list
Create the network
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider
Verify:
openstack network list
# or
neutron net-list
openstack subnet create --network provider \
--allocation-pool start=192.168.99.200,end=192.168.99.210 \
--dns-nameserver 192.168.99.2 --gateway 192.168.99.2 \
--subnet-range 192.168.99.0/24 provider-sub
--network must be the name of the network created above.
provider-sub is the subnet name.
Verify:
openstack subnet list
# or
neutron subnet-list
openstack flavor create --id 0 --vcpus 1 --ram 1024 --disk 10 m1.nano
--vcpus: number of CPU cores
--ram: memory (MB)
--disk: disk size (GB)
The last argument is the flavor name.
View the flavor list:
openstack flavor list
Generate a key pair:
ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
Verify that the key pair was added:
openstack keypair list
Add security group rules
2. Allow ICMP (ping)
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
openstack flavor list
List images:
openstack image list
List available networks:
openstack network list
List available security groups:
openstack security group list
openstack server create --flavor m1.nano --image cirros \
--nic net-id=a57d2907-a59d-4422-b231-8d3c788d10d3 \
--security-group default \
--key-name mykey provider-instance
--flavor: flavor name
--image: image name
--security-group: security group name
Replace the net-id with your PROVIDER_NET_ID.
The final argument, provider-instance, is the instance name.
openstack server list
openstack console url show provider-instance
provider-instance is your instance name.
Open the URL in a browser to connect to the instance.
Horizon is OpenStack's graphical interface for displaying and operating the other components. It talks to them over their APIs, working together with the image, compute, and network services, among others. Horizon is built on Python Django and serves web traffic through Apache's WSGI module; it only needs its configuration file changed to connect to keystone.
yum -y install openstack-dashboard
Edit the file /etc/openstack-dashboard/local_settings:
OPENSTACK_HOST = "192.168.99.101"
OPENSTACK_HOST is the controller's own IP.
Enable Identity API version 3:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Configure user as the default role for users created through the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
Accept all hosts:
ALLOWED_HOSTS = ['*']
Configure the memcached session storage service:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'openstackvip.com:11211',
}
}
Enable support for domains:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure the API versions:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
Configure default as the default domain for users created through the dashboard:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
If you chose networking option 1, disable support for layer-3 network services:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_': False,
'enable_fip_topology_check': False,
}
(Optional) configure the time zone:
TIME_ZONE = "UTC"
One-shot configuration with sed:
sed -i.bak '/^OPENSTACK_HOST/s#127.0.0.1#192.168.99.101#' /etc/openstack-dashboard/local_settings
sed -i '/^OPENSTACK_KEYSTONE_DEFAULT_ROLE/s#".*"#"user"#' /etc/openstack-dashboard/local_settings
sed -i "/^ALLOWED_HOSTS/s#\[.*\]#['*']#" /etc/openstack-dashboard/local_settings
sed -i '/^#SESSION_ENGINE/s/#//' /etc/openstack-dashboard/local_settings
sed -i "/^SESSION_ENGINE/s#'.*'#'django.contrib.sessions.backends.cache'#" /etc/openstack-dashboard/local_settings
sed -i "/^# 'default'/s/#//" /etc/openstack-dashboard/local_settings
sed -i "/^#CACHES/,+6s/#//" /etc/openstack-dashboard/local_settings
sed -i "/^ 'LOCATION'/s#127.0.0.1#openstackvip.com#" /etc/openstack-dashboard/local_settings
sed -i "/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT/s/#//" /etc/openstack-dashboard/local_settings
sed -i "/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT/s#False#True#" /etc/openstack-dashboard/local_settings
sed -i "/OPENSTACK_API_VERSIONS/,+5s/#//" /etc/openstack-dashboard/local_settings
sed -i '/"compute"/d' /etc/openstack-dashboard/local_settings
sed -i '/^#OPENSTACK_KEYSTONE_DEFAULT_DOMAIN/s/#//' /etc/openstack-dashboard/local_settings
sed -i '/^OPENSTACK_KEYSTONE_DEFAULT_DOMAIN/s/Default/default/' /etc/openstack-dashboard/local_settings
sed -i '/^OPENSTACK_NEUTRON_NETWORK/,+7s#True#False#' /etc/openstack-dashboard/local_settings
sed -i '/TIME_ZONE/s#UTC#UTC#' /etc/openstack-dashboard/local_settings
sed -i "/^OPENSTACK_NEUTRON_NETWORK/s/$/\n 'enable_lb': False,/" /etc/openstack-dashboard/local_settings
sed -i "/^OPENSTACK_NEUTRON_NETWORK/s/$/\n 'enable_firewall': False,/" /etc/openstack-dashboard/local_settings
sed -i "/^OPENSTACK_NEUTRON_NETWORK/s/$/\n 'enable_': False,/" /etc/openstack-dashboard/local_settings
Continue with the configuration below.
/etc/httpd/conf.d/openstack-dashboard.conf
vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
systemctl restart httpd.service
My memcached is installed on another machine:
systemctl restart memcached.service
IP: 192.168.99.105
yum -y install nfs-utils
useradd openstack
echo "/var/lib/glance/images 192.168.99.0/24(rw,all_squash,anonuid=`id -u openstack`,anongid=`id -g openstack`)" > /etc/exports
mkdir -p /var/lib/glance/images
systemctl restart nfs-server
systemctl enable nfs-server
exportfs -r
showmount -e
chown -R openstack.openstack /var/lib/glance/images/
showmount -e 192.168.99.115
Save the images somewhere before mounting:
mkdir /data ; mv /var/lib/glance/images/* /data
Mount:
echo "192.168.99.115:/var/lib/glance/images /var/lib/glance/images nfs defaults 0 0" >> /etc/fstab
mount -a
Move the images back:
mv /data/* /var/lib/glance/images
Required packages: haproxy + keepalived
One haproxy+keepalived machine was already built earlier, so we add one more physical machine as the backup.
IP: 192.168.99.104
Start configuring
yum -y install keepalived haproxy
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id ha_1
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_iptables
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.99.100 dev eth0 label eth0:1
}
}
systemctl start keepalived
systemctl enable keepalived
Before configuring, review the ports that need to be reverse-proxied:
PORT | Service
---|---
5000 | keystone
9292 | glance
8778 | placement
8774 | nova
9696 | neutron
6080 | VNC
3306 | MySQL
5672 | rabbitMQ
15672 | rabbitMQ_WEB
11211 | memcached
This configuration must also be added on ha_1.
vim /etc/haproxy/haproxy.cfg
listen stats
mode http
bind :9999
stats enable
log global
stats uri /haproxy-status
stats auth admin:123
listen dashboard
bind :80
mode http
balance source
server dashboard 192.168.99.101:80 check inter 2000 fall 3 rise 5
server dashboard2 192.168.99.103:80 check inter 2000 fall 3 rise 5
listen mysql
bind :3306
mode tcp
balance source
server mysql 192.168.99.106:3306 check inter 2000 fall 3 rise 5
listen memcached
bind :11211
mode tcp
balance source
server memcached 192.168.99.106:11211 inter 2000 fall 3 rise 5
listen rabbit
bind :5672
mode tcp
balance source
server rabbit 192.168.99.106:5672 inter 2000 fall 3 rise 5
listen rabbit_web
bind :15672
mode http
server rabbit_web 192.168.99.106:15672 inter 2000 fall 3 rise 5
listen keystone
bind :5000
mode tcp
server keystone 192.168.99.101:5000 inter 2000 fall 3 rise 5
server keystone2 192.168.99.103:5000 inter 2000 fall 3 rise 5
listen glance
bind :9292
mode tcp
server glance 192.168.99.101:9292 inter 2000 fall 3 rise 5
server glance2 192.168.99.103:9292 inter 2000 fall 3 rise 5
listen placement
bind :8778
mode tcp
server placement 192.168.99.101:8778 inter 2000 fall 3 rise 5
server placement2 192.168.99.103:8778 inter 2000 fall 3 rise 5
listen neutron
bind :9696
mode tcp
server neutron 192.168.99.101:9696 inter 2000 fall 3 rise 5
server neutron2 192.168.99.103:9696 inter 2000 fall 3 rise 5
listen nova
bind :8774
mode tcp
server nova 192.168.99.101:8774 inter 2000 fall 3 rise 5
server nova2 192.168.99.103:8774 inter 2000 fall 3 rise 5
listen VNC
bind :6080
mode tcp
server VNC 192.168.99.101:6080 inter 2000 fall 3 rise 5
server VNC2 192.168.99.103:6080 inter 2000 fall 3 rise 5
To achieve high availability, prepare another physical machine with the hostname controller2,
IP: 192.168.99.113
Prepare these files from controller1:
$ ls
admin.keystone* glance.tar keystone.tar placement.tar
dashboard.tar http_conf_d.tar neutron.tar yum/
demo.keystone* install_controller_openstack.sh* nova.tar
The final layout is shown above; the yum repos are the ones CentOS ships with. If you deleted them, copy them over from another host.
Preparation steps (on the existing controller):
# prepare httpd
cd /etc/httpd/conf.d
tar cf /root/http_conf_d.tar *
# prepare keystone
cd /etc/keystone
tar cf /root/keystone.tar *
# prepare glance
cd /etc/glance
tar cf /root/glance.tar *
# prepare placement
cd /etc/placement
tar cf /root/placement.tar *
# prepare nova
cd /etc/nova
tar cf /root/nova.tar *
# prepare neutron
cd /etc/neutron
tar cf /root/neutron.tar *
# prepare dashboard
cd /etc/openstack-dashboard
tar cf /root/dashboard.tar *
admin.keystone
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123
export OS_AUTH_URL=http://openstackvip.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
demo.keystone
#!/bin/bash
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123
export OS_AUTH_URL=http://openstackvip.com:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Script contents. Set the hostname first; the hostname must not contain an underscore (_).
#!/bin/bash
gecho() {
echo -e "\e[1;32m${1}\e[0m" && sleep 1
}
recho() {
echo -e "\e[1;31m${1}\e[0m" && sleep 1
}
gecho "Configuring yum repos..."
PWD=`dirname $0`
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/* /etc/yum.repos.d/bak/
mv $PWD/yum/* /etc/yum.repos.d/
yum -y install centos-release-openstack-stein
gecho "Installing the OpenStack client and openstack-selinux..."
yum -y install python-openstackclient openstack-selinux
yum -y install python2-PyMySQL mariadb
yum -y install openstack-keystone httpd mod_wsgi python-memcached
tar xf http_conf_d.tar -C /etc/httpd/conf.d
echo "192.168.99.211 openstackvip.com" >> /etc/hosts
echo "192.168.99.211 controller" >> /etc/hosts
gecho "Installing keystone..."
tar xf $PWD/keystone.tar -C /etc/keystone
systemctl enable httpd.service
systemctl start httpd.service
gecho "Installing glance..."
yum -y install openstack-glance
tar xf $PWD/glance.tar -C /etc/glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
gecho "Installing placement..."
yum -y install openstack-placement-api
tar xf $PWD/placement.tar -C /etc/placement
gecho "Installing nova..."
yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
tar xf $PWD/nova.tar -C /etc/nova
systemctl restart httpd
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
cat > /root/nova-restart.sh <<EOF
#!/bin/bash
systemctl restart openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
EOF
chmod a+x /root/nova-restart.sh
gecho "Installing neutron..."
yum -y install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
tar xf $PWD/neutron.tar -C /etc/neutron
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
sysctl -p
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
gecho "Installing dashboard..."
yum -y install openstack-dashboard
tar xf $PWD/dashboard.tar -C /etc/openstack-dashboard
systemctl restart httpd.service
recho "Rebooting in 5 seconds..."
for i in `seq 5 -1 1` ; do
tput sc
echo -n $i
sleep 1
tput rc
tput ed
done
reboot
Change all the earlier /etc/hosts files to:
192.168.99.211 openstackvip.com
192.168.99.211 controller
On the new physical machine, install CentOS 7.2 and configure the IP address and hostname.
Prepare these packages
Preparation
# prepare neutron, on your existing node
cd /etc/neutron
tar cf /root/neutron-compute.tar *
# prepare nova, on your existing node
cd /etc/nova
tar cf /root/nova-compute.tar *
File limits.conf
# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means for example that setting a limit for wildcard domain here
#can be overriden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overriden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#
#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20
#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
# End of file
* soft core unlimited
* hard core unlimited
* soft nproc 1000000
* hard nproc 1000000
* soft nofile 1000000
* hard nofile 1000000
* soft memlock 32000
* hard memlock 32000
* soft msgqueue 8192000
* hard msgqueue 8192000
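After logging in again, the new limits can be confirmed with ulimit (a quick check):
ulimit -n   # max open files, expect 1000000
ulimit -u   # max user processes, expect 1000000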
File profile
# /etc/profile
# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc
# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.
pathmunge () {
case ":${PATH}:" in
*:"$1":*)
;;
*)
if [ "$2" = "after" ] ; then
PATH=$PATH:$1
else
PATH=$1:$PATH
fi
esac
}
if [ -x /usr/bin/id ]; then
if [ -z "$EUID" ]; then
# ksh workaround
EUID=`id -u`
UID=`id -ru`
fi
USER="`id -un`"
LOGNAME=$USER
MAIL="/var/spool/mail/$USER"
fi
# Path manipulation
if [ "$EUID" = "0" ]; then
pathmunge /usr/sbin
pathmunge /usr/local/sbin
else
pathmunge /usr/local/sbin after
pathmunge /usr/sbin after
fi
HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
export HISTCONTROL=ignoreboth
else
export HISTCONTROL=ignoredups
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
umask 002
else
umask 022
fi
for i in /etc/profile.d/*.sh ; do
if [ -r "$i" ]; then
if [ "${-#*i}" != "$-" ]; then
. "$i"
else
. "$i" >/dev/null
fi
fi
done
unset i
unset -f pathmunge
export HISTTIMEFORMAT="%F %T `whoami` "
File sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Script openstack_node_script.sh
#!/bin/bash
gecho() {
echo -e "\e[1;32m${1}\e[0m" && sleep 1
}
recho() {
echo -e "\e[1;31m${1}\e[0m" && sleep 1
}
vip=192.168.99.211
controller_ip=192.168.99.211
gecho "Configuring yum repos"
PWD=`dirname $0`
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/* /etc/yum.repos.d/bak/
mv $PWD/yum/* /etc/yum.repos.d/
gecho "Installing packages..."
yum -y install centos-release-openstack-stein
yum -y install python-openstackclient openstack-selinux
yum -y install openstack-nova-compute
yum -y install openstack-neutron-linuxbridge ebtables ipset
cat $PWD/limits.conf > /etc/security/limits.conf
cat $PWD/profile > /etc/profile
cat $PWD/sysctl.conf > /etc/sysctl.conf
gecho "Configuring nova"
tar xvf $PWD/nova-compute.tar -C /etc/nova/
myip=`ifconfig eth0 | awk '/inet /{print $2}'`
sed -i "/my_ip =/s#.*#my_ip = ${myip}#" /etc/nova/nova.conf
gecho "Configuring neutron"
tar xf $PWD/neutron-compute.tar -C /etc/neutron
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
sysctl -p
echo "${vip} openstackvip.com" >> /etc/hosts
echo "${controller_ip} controller" >> /etc/hosts
vcpu=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ $vcpu -eq 0 ] ; then
cat >> /etc/nova/nova.conf <<EOF
[libvirt]
virt_type = qemu
EOF
fi
gecho "Starting services..."
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service || recho "libvirtd failed to start"
systemctl restart openstack-nova-compute.service || recho "openstack-nova-compute failed to start"
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
recho "Rebooting in 5 seconds..."
for i in `seq 5 -1 1` ; do
tput sc
echo -n $i
sleep 1
tput rc
tput ed
done
reboot
OpenStack's storage components, Cinder and Swift, let you build block storage and object storage systems in your private cloud. Starting with Folsom, OpenStack replaced the original Nova-Volume service with Cinder to provide block storage for the platform. The Cinder interface exposes standard functions for creating block devices and attaching them to virtual machines, such as "create volume", "delete volume", and "attach volume", plus more advanced capabilities: capacity extension, snapshots, and cloning VM images. The main components involved:
cinder-api: accepts API requests and routes them to cinder-volume for execution; every request to cinder goes through this public-facing API first.
cinder-volume: interacts directly with the block storage service and with processes such as cinder-scheduler, and can also talk to them over a message queue. It responds to read and write requests sent to the block storage service to maintain state, and works with many storage providers through its driver architecture.
cinder-scheduler daemon: picks the optimal storage node on which to create a volume, similar to the nova-scheduler component.
cinder-backup daemon: backs up volumes of any kind to a backup storage provider; like cinder-volume, it works with many storage providers through drivers.
Message queue: routes information between the block storage processes.
Listening port: 8776
- Database side
Create the database
mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '123';
flush privileges;
- Controller
source keystone_admin.sh
openstack user create --domain default --password-prompt cinder
I set the password to: 123
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne \
volumev2 public http://openstackvip.com:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev2 internal http://openstackvip.com:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev2 admin http://openstackvip.com:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 public http://openstackvip.com:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 internal http://openstackvip.com:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne \
volumev3 admin http://openstackvip.com:8776/v3/%\(project_id\)s
yum -y install openstack-cinder
sed -i -e '/^#/d' -e '/^$/d' /etc/cinder/cinder.conf
vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.99.101
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://openstackvip.com:5000
auth_url = http://openstackvip.com:5000
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
su -s /bin/sh -c "cinder-manage db sync" cinder
vim /etc/nova/nova.conf
Add this configuration under the corresponding section:
[cinder]
os_region_name = RegionOne
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
- Configure on the HA node
vim /etc/haproxy/haproxy.cfg
Append at the end:
listen cinder
bind :8776
mode tcp
server t1 192.168.99.101:8776 check inter 3s fall 3 rise 5
systemctl restart haproxy
Prepare a storage server; it can also be set up on the database server (saves a machine; give it 2 GB of RAM). We call it the "block storage" node.
- "Block storage" node (I do it on the database node)
echo "- - -" > /sys/class/scsi_host/host0/scan
yum -y install lvm2 device-mapper-persistent-data
systemctl enable lvm2-lvmetad.service
systemctl restart lvm2-lvmetad.service
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
sed -i -e '/#/d' -e '/^$/d' /etc/lvm/lvm.conf
vim /etc/lvm/lvm.conf
Find the following section and modify it:
devices {
...
filter = [ "a/sdb/", "r/.*/"]
a means accept, r means reject: only the sdb disk is accepted.
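To confirm the volume group cinder will use (a quick check):
pvs   # /dev/sdb should be listed as a physical volume
vgs cinder-volumes   # the VG referenced by volume_group in cinder.conf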
yum -y install openstack-cinder targetcli python-keystone
sed -i -e '/#/d' -e '/^$/d' /etc/cinder/cinder.conf
vim /etc/cinder/cinder.conf
[DEFAULT]
my_ip = 192.168.99.106
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
enabled_backends = lvm
glance_api_servers = http://openstackvip.com:9292
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://openstackvip.com:5000
auth_url = http://openstackvip.com:5000
memcached_servers = openstackvip.com:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[privsep]
[profiler]
[sample_castellan_source]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
volume_backend_name = openstack-lvm
my_ip is this machine's own IP.
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
- On the control node
openstack volume service list
Format the logical volume as ext4:
mkfs.ext4 /dev/vdb
mount /dev/vdb /data
Extend the volume:
resize2fs /dev/vdb
Used as a disk. Note: a cinder storage server can only provide one storage backend at a time, LVM or NFS, never both simultaneously.
Prepare a new virtual machine as the NFS server, IP 192.168.99.105.
- On the new NFS server
yum -y install nfs-utils rpcbind
useradd nfsuser
id nfsuser
vim /etc/exports
/nfsdata *(rw,all_squash,anonuid=1000,anongid=1000)
Set permissions:
mkdir /nfsdata
chown nfsuser.nfsuser /nfsdata
anonuid and anongid must be nfsuser's uid and gid.
systemctl enable nfs
systemctl restart nfs
- On the control node
vim /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = nfs
[nfs]
volume_backend_name = openstack-nfs
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares.conf
nfs_mount_point_base = $state_path/mnt
volume_backend_name: defines the name, used later for the type association
volume_driver: the driver
nfs_shares_config: the path of the NFS shares configuration file
nfs_mount_point_base: the NFS mount point
vim /etc/cinder/nfs_shares.conf
The file does not exist; create it:
192.168.99.105:/nfsdata
chown root.cinder /etc/cinder/nfs_shares.conf
systemctl restart openstack-cinder-volume.service
cinder service-list
cinder type-create lvm
cinder type-create nfs
cinder type-key nfs set volume_backend_name=openstack-nfs
cinder type-key lvm set volume_backend_name=openstack-lvm
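To exercise both backends (a sketch; test-lvm and test-nfs are example names), create one volume of each type and check where each lands:
openstack volume create --type lvm --size 1 test-lvm
openstack volume create --type nfs --size 1 test-nfs
openstack volume list   # both should reach the "available" status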
- "Block storage" node
yum -y install openstack-cinder
vim /etc/cinder/cinder.conf
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL
openstack catalog show object-store
systemctl enable openstack-cinder-backup.service
systemctl start openstack-cinder-backup.service
A Virtual Private Cloud (VPC) is an isolated network environment. VPCs are logically isolated from one another; within one you can choose your own IP address ranges, subnets, route tables, and gateways, giving safe and convenient access to resources and applications.
This builds on the completed provider network configuration: self-service networks reach the external provider network through a router.
- On the controller
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
vim /etc/neutron/neutron.conf
Modify or add to the existing configuration:
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:[email protected]
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
Only the DEFAULT section changes.
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
[ml2_type_vxlan]
vni_ranges = 1:1000
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.99.101
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# already configured earlier; do not add it again
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
Apply:
sysctl -p
vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl restart neutron-l3-agent.service
- On the compute node
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 192.168.99.23
l2_population = true
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
systemctl restart openstack-nova-compute.service
systemctl restart neutron-linuxbridge-agent.service
- Controller
openstack network agent list
- Controller
openstack network create selfnetwork
Verify:
openstack network list
openstack subnet create --network selfnetwork --dns-nameserver 8.8.8.8 --gateway 172.16.0.1 --subnet-range 172.16.0.0/16 selfnetwork-subnet
Command format:
openstack subnet create --network NETWORK_NAME \
--dns-nameserver 8.8.8.8 --gateway 172.16.1.1 \
--subnet-range 172.16.1.0/24 SUBNET_NAME
openstack router create router
neutron router-interface-add router selfnetwork-subnet
neutron router-gateway-set router provider
vim /etc/openstack-dashboard/local_settings
Modify this block:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': True,
'enable_ha_router': True,
'enable_lb': True,
'enable_firewall': True,
'enable_vpn': True,
'enable_fip_topology_check': True,
...
openstack router show router | grep status
neutron router-port-list router
Verify that the external network can be pinged from the instance.
This reproduces a structure similar to Alibaba Cloud ECS hosts with separate internal and external networks (two NICs on different subnets), ultimately isolating internal from external traffic.
- On the controller node
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Modify this section:
[linux_bridge]
physical_interface_mappings = provider:eth0, external:eth1
vim /etc/neutron/plugins/ml2/ml2_conf.ini
Modify this section:
[ml2_type_flat]
flat_networks = provider, external
systemctl restart neutron-linuxbridge-agent
systemctl restart neutron-server
- On the compute node
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Modify this section:
[linux_bridge]
physical_interface_mappings = provider:eth0, external:eth1
systemctl restart neutron-linuxbridge-agent
- On the controller node
neutron net-create --shared --provider:physical_network external --provider:network_type flat external-net
neutron subnet-create --name external-subnet \
--allocation-pool start=172.16.23.200,end=172.16.23.220 \
--dns-nameserver 114.114.114.114 external-net 172.16.0.0/16
neutron net-list
This section walks through installing CentOS 7.2, CentOS 6.9, and Windows 2008 R2 x86_64 guests with KVM, and also building on the official GenericCloud 7.2.1511 image. The resulting disk files are uploaded to OpenStack Glance as images for creating virtual machines in bulk. For Windows 2008, the virtio paravirtualized drivers are installed so that network and disk I/O benefit from paravirtualization.
CentOS 7 supports virtio paravirtualization out of the box, so no driver installation is needed. Virtio, originally written by the Australian programmer Rusty Russell, is an abstraction API that sits on top of the hypervisor. It lets a guest know it is running in a virtualized environment so that it can cooperate with the hypervisor according to the virtio standard and achieve better performance, especially for I/O. Many virtualization platforms now ship virtio paravirtualized drivers for this reason.
ISO
The ISO format is a disk image formatted with the read-only ISO 9660 file system commonly used for CDs and DVDs.
OVF
OVF (Open Virtualization Format) is a virtual machine packaging format defined by the Distributed Management Task Force (DMTF) standards group. An OVF package contains one or more image files, a .ovf XML metadata file holding information about the virtual machine, and possibly other files.
QCOW2
The QCOW2 format is commonly used with the KVM hypervisor. It adds some features on top of the raw format, for example:
Because qcow2 is sparse, qcow2 images are usually smaller than raw images. Smaller images upload faster, so it is often quicker to convert a raw image to qcow2 before uploading than to upload the raw file directly.
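For example, converting a raw image to qcow2 before uploading (the file names here are placeholders):

```bash
qemu-img convert -f raw -O qcow2 disk.img disk.qcow2   # raw in, sparse qcow2 out
qemu-img info disk.qcow2                               # check the format and the virtual vs. actual size
```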
Raw
The raw image format is the simplest; it is natively supported by both the KVM and Xen hypervisors.
VDI
VirtualBox uses the VDI (Virtual Disk Image) format for its image files. None of the OpenStack Compute hypervisors support VDI directly, so you need to convert these files to another format before using them with OpenStack.
VHD
Microsoft Hyper-V uses the VHD (Virtual Hard Disk) image format.
VHDX
The Hyper-V version shipped with Microsoft Server 2012 uses the newer VHDX format, which adds features over VHD such as support for larger disk sizes and protection against data corruption during power failures.
VMDK
The VMware ESXi hypervisor uses the VMDK (Virtual Machine Disk) format for images.
- On the image node
The image node is simply a fresh virtual machine.
yum install bridge-utils -y
vim /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
vim /etc/sysconfig/network-scripts/ifcfg-br0
TYPE=Bridge
BOOTPROTO=static
NAME=br0
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.99.50
NETMASK=255.255.255.0
GATEWAY=192.168.99.2
DNS1=114.114.114.114
vim /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
BRIDGE=br1
vim /etc/sysconfig/network-scripts/ifcfg-br1
TYPE=Bridge
BOOTPROTO=static
NAME=br1
DEVICE=br1
ONBOOT=yes
IPADDR=172.16.23.200
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
DNS1=114.114.114.114
systemctl restart network
ping www.baidu.com
yum groupinstall "GNOME Desktop" -y
reboot
Building an image means doing a minimal OS install on the host, configuring and tuning it, shutting the VM down, and then uploading the VM's disk file to Glance, from which new virtual machines can be launched.
- On the image node
yum install -y qemu-kvm qemu-kvm-tools libvirt virt-manager virt-install
qemu-img create -f qcow2 /var/lib/libvirt/images/CentOS7.qcow2 10G
file /var/lib/libvirt/images/CentOS7.qcow2
The installation ISO used below is /data/CentOS-7-x86_64-Minimal-1511.iso.
virt-install --virt-type kvm \
--name CentOS7_1 \
--ram 1024 \
--cdrom=/data/CentOS-7-x86_64-Minimal-1511.iso \
--disk path=/var/lib/libvirt/images/CentOS7.qcow2 \
--network bridge=br0 \
--graphics vnc,listen=0.0.0.0 \
--noautoconsole
ss -tnl | grep 5900
virt-manager
- On the newly created KVM virtual machine
ping www.baidu.com
yum install wget -y
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install -y net-tools vim lrzsz tree screen lsof ntpdate telnet acpid
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
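Before the disk file is turned into a template, it is also common practice (an addition, not part of the original steps) to strip machine-specific identifiers so clones do not inherit them:

```bash
# drop the NIC-specific bindings recorded during installation
sed -i '/^UUID=/d;/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
# remove stale persistent NIC naming rules, if present
rm -f /etc/udev/rules.d/70-persistent-net.rules
```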
- On the controller node
ssh-keygen
ssh-copy-id localhost
cat /root/.ssh/authorized_keys
scp ~/.ssh/* 172.16.134.104:/root/.ssh/
172.16.134.104 is the IP of the newly created KVM virtual machine.
If scp reports "scp: /root/.ssh/: No such file or directory",
run mkdir /root/.ssh on the KVM virtual machine first.
ssh 172.16.134.104
- On the image node
Shut the VM down, then copy the image to the controller:
cd /var/lib/libvirt/images/
scp CentOS7_2.qcow2 192.168.99.101:/data/
- On the controller node
source admin.keystone
openstack image create "CentOS-7-template" \
--file /data/CentOS7_2.qcow2 \
--disk-format qcow2 \
--container-format bare \
--public
openstack image list
openstack flavor create --vcpus 1 \
--ram 2048 --disk 20 centos-1C-2G-20G
openstack flavor list
openstack server create --flavor centos-1C-2G-20G \
--image CentOS-7-template \
--nic net-id=a57d2907-a59d-4422-b231-8d3c788d10d3 \
--security-group 271e3299-3a32-4b57-9afa-0d13cc8673a1 \
--key-name mykey \
centos-tmp-1
net-id: look up with openstack network list
security-group: look up with openstack security group list
key-name: look up with openstack keypair list
ping www.baidu.com
A floating IP is associated with an individual instance, giving a one-to-one mapping between the floating IP and the instance's self-service IP; through this mapping, the virtual machine becomes reachable from the external network.
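A minimal command sequence for this, assuming the provider network and the centos-tmp-1 instance created earlier (the address actually allocated comes from the pool; 192.168.99.203 matches the SSH session below):

```bash
openstack floating ip create provider                          # allocate a floating IP from the external pool
openstack server add floating ip centos-tmp-1 192.168.99.203   # map it onto the instance's fixed IP
openstack server list                                          # the floating IP now shows beside the fixed IP
```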
[controller11]$ ssh 192.168.99.203
The authenticity of host '192.168.99.203 (192.168.99.203)' can't be established.
ECDSA key fingerprint is SHA256:IPL9jmUK5tk53Yg7Auqduizch2Yak+uD+OiBsXLSqTw.
ECDSA key fingerprint is MD5:2a:6e:e3:e2:39:7e:18:7d:05:79:ed:bd:25:46:33:17.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.203' (ECDSA) to the list of known hosts.
Last login: Sun Sep 1 02:31:14 2019
[root@localhost ~]#
yum -y install httpd
echo testtest > /var/www/html/index.html
systemctl restart httpd
systemctl enable httpd
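From a machine on the external network, the floating IP should now serve the test page:

```bash
curl http://192.168.99.203/    # expected output: testtest
```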
neutron quota-show admin
- On the controller
vim /etc/neutron/neutron.conf
[quotas]
quota_network = 10
quota_subnet = 10
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 10
quota_floatingip = 1000
quota_security_group = 10
quota_security_group_rule = 100
systemctl restart openstack-nova-api.service neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
- On the compute node
vim /etc/neutron/neutron.conf
[quotas]
quota_network = 10
quota_subnet = 10
quota_port = 5000
quota_driver = neutron.db.quota.driver.DbQuotaDriver
quota_router = 10
quota_floatingip = 50
quota_security_group = 10
quota_security_group_rule = 100
systemctl restart neutron-linuxbridge-agent
openstack port list | grep 172.16.23.220
mysql -uneutron -hopenstackvip.com -p
Here the IP is changed from 172.16.23.220 to 172.16.23.222.
USE neutron;
SELECT * FROM ports WHERE device_id="937a9a25-4a7c-4847-898f-2db8828ecde5";
SELECT * FROM ipallocations WHERE port_id="937a9a25-4a7c-4847-898f-2db8828ecde5";
UPDATE ipallocations SET ip_address="172.16.23.222" WHERE port_id="937a9a25-4a7c-4847-898f-2db8828ecde5";
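After the UPDATE, the port should carry the new address; this mirrors the lookup used before the change:

```bash
openstack port list | grep 172.16.23.222
```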
Sometimes an existing virtual machine needs more memory, CPU, or disk to meet business needs, so the deployment must allow resizing after creation. (Resizing an instance is essentially the same as migrating it.) In practice the data is first copied to a new node, a new virtual machine is created there, and the old one is deleted.
- On the controller node
openstack flavor create --id 1 --vcpus 1 --ram 512 --disk 10 newone
vim /etc/nova/nova.conf
Add allow_resize_to_same_host in the [DEFAULT] section; in Stein the scheduler filter list belongs in the [filter_scheduler] section:
[DEFAULT]
allow_resize_to_same_host=true
[filter_scheduler]
enabled_filters=RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter
bash nova-restart.sh
There are currently two compute nodes:
ip1:192.168.99.22
ip2:192.168.99.23
- Compute node 1
usermod nova -s /bin/bash
echo nova123 | passwd --stdin nova
- Compute node 2
usermod nova -s /bin/bash
echo nova123 | passwd --stdin nova
- Compute node 1
su nova
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa > /dev/null 2>&1
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
ssh [email protected]
# after logging in, check that the IP is 99.23
- Compute node 2
su nova
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa > /dev/null 2>&1
ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected]
ssh [email protected]
# after logging in, check that the IP is 99.22
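With resizing enabled and the nova user able to SSH between the compute nodes, a resize can be exercised; a sketch using the newone flavor created above and the instance from the earlier example:

```bash
nova resize centos-tmp-1 newone --poll   # migrate the instance onto the new flavor
nova resize-confirm centos-tmp-1         # confirm once it reaches VERIFY_RESIZE
```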
Host aggregate commands:
Create a host aggregate, optionally creating an availability zone such as az1 at the same time:
nova aggregate-create <aggregate-name> [availability-zone]
List host aggregates and availability zones:
nova aggregate-list
Add a host to an aggregate:
nova aggregate-add-host <aggregate-name> <compute-node-name>
Show which availability zone each host and service belongs to:
nova service-list
Show availability-zone status:
nova availability-zone-list
Method 1: command line
nova aggregate-create agg1
nova aggregate-add-host agg1 node1
nova aggregate-add-host agg1 node2
nova boot --image centos --flavor test \
--availability-zone t300:node1 \
--nic net-name=provider,v4-fixed-ip=192.168.6.130 \
vmt300
--image specifies the image name
--flavor specifies the flavor
--availability-zone specifies <availability-zone>:<compute-node>
--nic specifies the network name and IP
The trailing vmt300 is the virtual machine name.
To give the instance IPs on two networks, add a second --nic option, as in the sketch below.
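For instance, a hypothetical boot that puts the instance on both networks defined earlier (the second fixed IP is only an example from the external-net allocation pool):

```bash
nova boot --image centos --flavor test \
  --availability-zone t300:node1 \
  --nic net-name=provider,v4-fixed-ip=192.168.6.130 \
  --nic net-name=external-net,v4-fixed-ip=172.16.23.210 \
  vmt300-dual
```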
Apply the following settings in /etc/nova/nova.conf on both the controller and the compute nodes.
Resume guests automatically when the host boots:
resume_guests_state_on_host_boot=true
CPU overcommit (the default is 16, i.e. up to 16 virtual CPUs per physical CPU):
cpu_allocation_ratio=16
Memory overcommit (here 1.5x physical memory):
ram_allocation_ratio=1.5
Disk overcommit (avoid overcommitting disk; it can lead to data loss):
disk_allocation_ratio=1.0
Reserved disk space (kept back for the host system):
reserved_host_disk_mb=20480
Reserved memory (kept back for the host system):
reserved_host_memory_mb=4096
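Collected in one place, the same values can be written into the [DEFAULT] section of /etc/nova/nova.conf on every node; a sketch assuming the crudini tool is installed (yum install crudini):

```bash
for kv in resume_guests_state_on_host_boot=true cpu_allocation_ratio=16 \
          ram_allocation_ratio=1.5 disk_allocation_ratio=1.0 \
          reserved_host_disk_mb=20480 reserved_host_memory_mb=4096; do
  crudini --set /etc/nova/nova.conf DEFAULT "${kv%%=*}" "${kv#*=}"
done
bash nova-restart.sh   # restart the nova services with the script used earlier
```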