Cloud computing is implemented through virtualization; it is a pay-as-you-go model.
- Renting space in someone else's data center is expensive
- Scaling out is tedious
- IaaS: Infrastructure as a Service
- PaaS: Platform as a Service [each language runtime is a platform]
- SaaS: Software as a Service
- VM management platform: the state of every virtual machine is tracked in a database
- OpenStack implements the IaaS layer of cloud computing; it is an open-source cloud platform licensed under Apache 2.0.
- OpenStack uses an SOA architecture:
the business is split up so that every function becomes an independent web service, each backed by at least one cluster
Official documentation: https://docs.openstack.org/zh_CN/
Further reading: "OpenStack 从入门到放弃", 老男孩 IT 教育's tech blog on 51CTO
| Node name | IP address | Role |
| --- | --- | --- |
| OSACN | 192.168.80.181 | Control node (manages the virtual machines) |
| OSACM | 192.168.80.182 | Compute node (hosts the virtual machines) |
- OSACM (compute): virtualization must be enabled on the compute node
- OSACN (control): the control node must have more than 3 GB of RAM
Mount the install CD at /mnt, and make the mount persist across reboots
$ mount /dev/cdrom /mnt
# ERROR: mount: no medium found on /dev/sr0
# Fix: in the VM settings -> Hardware -> CD/DVD -> Device status, tick both
# "Connected" and "Connect at power on".
$ echo "mount /dev/cdrom /mnt" >> /etc/rc.local
$ chmod +x /etc/rc.d/rc.local
Upload openstack_rpm.tar.gz to /opt and extract it
Create the local.repo file pointing at the local repositories
$ vim /etc/yum.repos.d/local.repo
-----------------------------------------
[local]
name=local
baseurl=file:///mnt
gpgcheck=0
[openstack]
name=openstack
baseurl=file:///opt/repo
gpgcheck=0
-----------------------------------------
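To sanity-check that yum can see both repositories (assuming the tarball unpacked into /opt/repo as configured above):
$ yum clean all
$ yum repolist   # both [local] and [openstack] should report a non-zero package count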
Install the OpenStack client and openstack-selinux on all nodes
$ yum install -y python-openstackclient openstack-selinux
Install the database components on the control node
$ yum install -y mariadb mariadb-server python2-PyMySQL
$ cat /etc/my.cnf.d/openstack.cnf
-----------------------------------------
[mysqld]
bind-address = 192.168.80.181
default-storage-engine=innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
-----------------------------------------
$ systemctl start mariadb
$ systemctl enable mariadb
$ mysql_secure_installation
# press Enter for the current root password, then answer y to the remaining prompts
Install the message queue on the control node
$ yum install -y rabbitmq-server
$ systemctl start rabbitmq-server.service
$ systemctl enable rabbitmq-server.service
$ rabbitmqctl add_user openstack RABBIT_PASS            # create the user and set its password
Creating user "openstack" ...
$ rabbitmqctl set_permissions openstack ".*" ".*" ".*"  # grant the user configure/write/read permissions
Setting permissions for user "openstack" in vhost "/" ...
$ rabbitmq-plugins enable rabbitmq_management           # enable the web management plugin
## check that ports 5672, 15672 and 25672 are listening
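A quick way to double-check the account and the listeners; rabbitmqctl ships with the server package and ss with CentOS 7:
$ rabbitmqctl list_users            # the openstack user should be listed
$ ss -lntp | grep -E '5672|15672'   # AMQP port plus the management UI enabled above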
Use memcached to cache tokens
$ yum install -y memcached python-memcached
$ sed -i "s#127.0.0.1#192.168.80.181#g" /etc/sysconfig/memcached
$ systemctl start memcached.service
$ systemctl enable memcached.service
## check that port 11211 is listening
Functions: keystone provides authentication, authorization, and the catalog of service endpoints.
Setting up the identity service
Create the database and grant privileges
$ mysql -uroot -p1
$ create database keystone;
$ grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONE_DBPASS';
$ grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONE_DBPASS';
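Optionally, confirm the grant works before moving on; a sanity check using the password set above:
$ mysql -ukeystone -pKEYSTONE_DBPASS -e "use keystone;"   # exits silently on success; an access-denied error means the grant failed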
Install the keystone packages
$ yum install -y openstack-keystone httpd mod_wsgi
Edit the configuration file
$ cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
$ grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
$ vim /etc/keystone/keystone.conf
-------------------------------------------
[DEFAULT]
admin_token = ADMIN_TOKEN
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@osacn/keystone
## protocol://db-user:password@host/database-name
[token]
provider = fernet
-------------------------------------------
## Alternatively, use openstack-config:
$ yum install -y openstack-utils
$ openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
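The same tool can apply the other two settings from the manual edit above; a sketch mirroring them (host osacn per this lab):
$ openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@osacn/keystone
$ openstack-config --set /etc/keystone/keystone.conf token provider fernet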
Initialize the identity service database and the Fernet keys
$ su -s /bin/sh -c "keystone-manage db_sync" keystone # run the quoted command as the keystone user via /bin/sh
$ mysql -uroot -p1 keystone -e "show tables;" # tables listed means the previous step succeeded
$ keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
$ ls /etc/keystone # a fernet-keys directory means the previous step succeeded
Configure httpd
$ echo "ServerName osacn" >> /etc/httpd/conf/httpd.conf # makes Apache start faster
$ vim /etc/httpd/conf.d/wsgi-keystone.conf
-------------------------------------------
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
-------------------------------------------
$ systemctl enable httpd.service
$ systemctl start httpd.service
## check that ports 5000 and 35357 are listening
Create the service and register the API endpoints
## Set the environment variables
$ export OS_TOKEN=ADMIN_TOKEN # the bootstrap admin token
$ export OS_URL=http://osacn:35357/v3 # the endpoint URL
$ export OS_IDENTITY_API_VERSION=3 # the identity API version
## Create the identity service
$ openstack service create \
--name keystone --description "OpenStack Identity" identity
## Create the three API endpoints (public, internal, admin)
$ openstack endpoint create --region RegionOne \
identity public http://osacn:5000/v3
$ openstack endpoint create --region RegionOne \
identity internal http://osacn:5000/v3
$ openstack endpoint create --region RegionOne \
identity admin http://osacn:35357/v3
## each of the four commands above should print a result table on success
Create a domain (region), project (team), user, and role
$ openstack domain create --description "Default Domain" default # create the domain
$ openstack project create --domain default --description "Admin Project" admin # create the project
$ openstack user create --domain default --password ADMIN_PASS admin # create the user
$ openstack role create admin # create the role
$ openstack role add --project admin --user admin admin # associate the project, user, and role
$ openstack project create --domain default --description "Service Project" service # create the service project
Verify
$ vim admin-openrc
--------------------------------------------
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://osacn:35357/v3
export OS_IDENTITY_API_VERSION=3
--------------------------------------------
vim /root/.bashrc
--------------------------------------------
source admin-openrc
--------------------------------------------
$ env | grep OS # list the OS_* environment variables
$ unset OS_TOKEN OS_URL # drop the bootstrap token and endpoint before using password auth
$ openstack token issue # verify that keystone works
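For a scripted check, openstackclient's standard output filters reduce this to an exit code:
$ openstack token issue -f value -c id > /dev/null && echo "keystone OK"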
Create the database and grant privileges (glance)
$ mysql -uroot -p1
$ CREATE DATABASE glance;
$ GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
$ GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Create the glance user in keystone and bind the admin role
$ openstack user create --domain default --password GLANCE_PASS glance
$ openstack role add --project service --user glance admin
Create the service in keystone and register the API endpoints
$ openstack service create --name glance \
--description "OpenStack Image" image
$ openstack endpoint create --region RegionOne \
image public http://osacn:9292
$ openstack endpoint create --region RegionOne \
image internal http://osacn:9292
$ openstack endpoint create --region RegionOne \
image admin http://osacn:9292
Install glance
yum install -y openstack-glance
Configure the components and start the services
$ cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
$ grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
$ vim /etc/glance/glance-api.conf
--------------------------------------------
[DEFAULT]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@osacn/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
--------------------------------------------
$ cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
$ grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
$ vim /etc/glance/glance-registry.conf
--------------------------------------------
[DEFAULT]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@osacn/glance
[glance_store]
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
--------------------------------------------
$ su -s /bin/sh -c "glance-manage db_sync" glance
$ mysql -uroot -p1 glance -e "show tables" # check that tables were created
$ systemctl enable openstack-glance-api.service openstack-glance-registry.service
$ systemctl start openstack-glance-api.service openstack-glance-registry.service
Verify
## check that ports 9191 and 9292 are listening
## Upload an image
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
$ ls /var/lib/glance/images # the image file should be there
$ openstack image list
+---------------+--------+--------+
| ID | Name | Status |
+----------------+--------+--------+
| e447460b-89a2 | cirros | active |
+----------------+--------+--------+
(一)Node overview
(二)Operations on the control node
Create the databases and grant privileges
$ mysql -u root -p1
$ CREATE DATABASE nova_api;
$ CREATE DATABASE nova;
$ GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
$ GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
$ GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
$ GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
Create the nova user in keystone and bind the role
$ openstack user create --domain default \
--password NOVA_PASS nova
$ openstack role add --project service --user nova admin
Create the service entity and the API endpoints
$ openstack service create --name nova \
--description "OpenStack Compute" compute
$ openstack endpoint create --region RegionOne \
compute public http://osacn:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
compute internal http://osacn:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
compute admin http://osacn:8774/v2.1/%\(tenant_id\)s
Install the packages
yum install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler -y
Edit the configuration file
$ cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
$ grep -Ev "^$|#" /etc/nova/nova.conf.bak > /etc/nova/nova.conf
$ vim /etc/nova/nova.conf
--------------------------------------------
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.80.181
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@osacn/nova_api
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@osacn/nova
[ephemeral_storage_encryption]
[glance]
api_servers = http://osacn:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[matchmaker_redis]
[metrics]
[neutron]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = osacn
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[xenserver]
--------------------------------------------
Sync the databases and start the services
$ su -s /bin/sh -c "nova-manage api_db sync" nova
$ su -s /bin/sh -c "nova-manage db sync" nova
$ systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
$ systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Verify
$ nova service-list # you should see three services up
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | osacn | internal | enabled | up | 2021-08-27T01:15:56.000000 | - |
| 2 | nova-scheduler | osacn | internal | enabled | up | 2021-08-27T01:15:57.000000 | - |
| 3 | nova-conductor | osacn | internal | enabled | up | 2021-08-27T01:15:59.000000 | - |
+----+------------------+-------+----------+---------+-------+----------------------------+-----------------+
## port 6080 should be listening
(三)Compute node
Install the package
$ yum install openstack-nova-compute -y
Edit the configuration file
cat /etc/nova/nova.conf
--------------------------------------------
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.80.182
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[conductor]
[cors]
[cors.subdomain]
[database]
[ephemeral_storage_encryption]
[glance]
api_servers = http://osacn:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
[keymgr]
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
virt_type = qemu # include this line when egrep -c '(vmx|svm)' /proc/cpuinfo returns 0!
[matchmaker_redis]
[metrics]
[neutron]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = osacn
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.80.181:6080/vnc_auto.html
[workarounds]
[xenserver]
--------------------------------------------
Start the services
$ yum install -y qemu* # works around a qemu dependency bug
$ systemctl enable libvirtd.service openstack-nova-compute.service
$ systemctl start libvirtd.service openstack-nova-compute.service
Verify
## run on the control node
$ nova service-list # you should now see four services up
(一)Basics
(二)Control node configuration
Create the database and grant privileges
$ mysql -u root -p1
$ CREATE DATABASE neutron;
$ GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
$ GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
Create the service credentials
$ openstack user create --domain default --password NEUTRON_PASS neutron
$ openstack role add --project service --user neutron admin
Create the service entity
$ openstack service create --name neutron \
--description "OpenStack Networking" network
Register the networking API endpoints
$ openstack endpoint create --region RegionOne \
network public http://osacn:9696
$ openstack endpoint create --region RegionOne \
network internal http://osacn:9696
$ openstack endpoint create --region RegionOne \
network admin http://osacn:9696
Install and configure the service components
$ yum install openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
cat /etc/neutron/neutron.conf
--------------------------------------------
[DEFAULT]
core_plugin = ml2
service_plugins =
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@osacn/neutron
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
auth_url = http://osacn:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = osacn
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
--------------------------------------------
$ cat /etc/neutron/plugins/ml2/ml2_conf.ini
--------------------------------------------
[DEFAULT]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True
--------------------------------------------
$ cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
--------------------------------------------
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eno33554984
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
--------------------------------------------
$ cat /etc/neutron/dhcp_agent.ini
--------------------------------------------
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[AGENT]
--------------------------------------------
$ cat /etc/neutron/metadata_agent.ini
--------------------------------------------
[DEFAULT]
nova_metadata_ip = osacn
metadata_proxy_shared_secret = METADATA_SECRET
[AGENT]
--------------------------------------------
$ cat /etc/nova/nova.conf
--------------------------------------------
...
[neutron]
url = http://osacn:9696
auth_url = http://osacn:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
...
--------------------------------------------
$ ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
$ su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Start the services and verify
$ systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
$ systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
$ neutron agent-list # three agents up means success
(三)Compute node
Install the related services
$ yum install -y openstack-neutron-linuxbridge ebtables ipset
Configure the common components
cat /etc/neutron/neutron.conf
--------------------------------------------
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[agent]
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = osacn
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_policy]
[qos]
[quotas]
[ssl]
--------------------------------------------
cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
--------------------------------------------
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eno33554984
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False
--------------------------------------------
cat /etc/nova/nova.conf
--------------------------------------------
...
[neutron]
url = http://osacn:9696
auth_url = http://osacn:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
--------------------------------------------
Start the service and verify
$ systemctl restart openstack-nova-compute.service
$ systemctl enable neutron-linuxbridge-agent.service
$ systemctl start neutron-linuxbridge-agent.service
## run the following on the control node
$ neutron agent-list # success when the compute node's hostname appears in the host column
Install the dashboard package
$ yum install openstack-dashboard -y
Configure
Drop a prepared local_settings into /etc/openstack-dashboard/ and correct the hostname entries (point them at the control node's hostname).
Start the services
$ systemctl restart httpd.service memcached.service
$ cat /etc/httpd/conf.d/openstack-dashboard.conf
--------------------------------------------
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIApplicationGroup %{GLOBAL} # add this last line
--------------------------------------------
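A quick reachability check once httpd is back up; the /dashboard path assumes the package's default WEBROOT:
$ curl -s -o /dev/null -w "%{http_code}\n" http://osacn/dashboard/   # expect 200, or a 302 redirect to the login page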
Create the network (network + subnet)
$ neutron net-create --shared --provider:physical_network provider --provider:network_type flat codefun
## Parameter notes
--shared allows all projects to use the virtual network
# the provider:* options must match what was configured in the config files; the network name is codefun
$ neutron subnet-create --name provider \
--allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
--dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
provider PROVIDER_NETWORK_CIDR
## Parameter notes
1. The allocation pool must not include addresses that are already in use
2. The DNS resolver can be taken from /etc/resolv.conf
3. The gateway is the host's gateway
# For example:
$ neutron subnet-create --name codebad \
--allocation-pool start=192.168.80.200,end=192.168.80.250 \
--dns-nameserver 114.114.114.114 --gateway 192.168.80.2 \
codefun 192.168.80.0/24
$ neutron subnet-create --name codebadny \
--allocation-pool start=172.16.0.200,end=172.16.0.250 \
--dns-nameserver 114.114.114.114 --gateway 172.16.0.2 \
codefunny 172.16.0.0/24
Create a hardware flavor for instances
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Create a key pair
$ ssh-keygen -q -N "" -f ~/.ssh/id_rsa # generate the key pair
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey # upload the public key to OpenStack
Create security group rules
$ openstack security group rule create --proto icmp default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | 8e4db7e7-e29d-4705-86de-7770d60eefb4 |
| ip_protocol | icmp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | fbd20348-ffdd-46b6-bbc9-259d28b76609 |
| port_range | |
| remote_security_group | |
+-----------------------+--------------------------------------+
$ openstack security group rule create --proto tcp --dst-port 22 default
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| id | a6313f44-e85c-4dc5-a1d5-01d955151ea7 |
| ip_protocol | tcp |
| ip_range | 0.0.0.0/0 |
| parent_group_id | fbd20348-ffdd-46b6-bbc9-259d28b76609 |
| port_range | 22:22 |
| remote_security_group | |
+-----------------------+--------------------------------------+
Launch an instance
$ neutron net-list # get the net-id
+---------------------------+---------+----------------------------+
| id | name | subnets |
+---------------------------+---------+----------------------------+
| e694a207-3385-4024-a57a- | codefun | 48c64edf-b12d-445c- |
| dfc5ad5ec28d | | b4e4-0e08faa5edfd |
| | | 192.168.80.0/24 |
+---------------------------+---------+----------------------------+
$ openstack server create --flavor m1.nano --image cirros --nic net-id=e694a207-3385-4024-a57a-dfc5ad5ec28d --security-group default --key-name mykey codefun001
Check that the instance was created
$ nova list
Stop the two glance services on the control node and disable them at boot (sketched below)
Dump the glance database on the control node and push it to the target server
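A sketch of the commands this step implies, using the service names from the earlier glance install:
$ systemctl stop openstack-glance-api.service openstack-glance-registry.service
$ systemctl disable openstack-glance-api.service openstack-glance-registry.service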
$ mysqldump -uroot -p1 -B glance > glance.sql
$ scp -rp glance.sql osacm:/root/
On the target server: install the database, start it and enable it at boot, run the secure-installation script, install the two glance services, and import the database dump
$ yum install mariadb mariadb-server python2-PyMySQL -y
$ systemctl start mariadb && systemctl enable mariadb
$ mysql_secure_installation
$ mysql < glance.sql
$ yum install -y openstack-glance
Copy the glance configuration files from the control node to the target server, adjust them there, then start the two glance services and enable them at boot
$ scp /etc/glance/glance-registry.conf osacm:/etc/glance/glance-registry.conf
$ scp /etc/glance/glance-api.conf osacm:/etc/glance/glance-api.conf
$ sed -i 's#GLANCE_DBPASS@osacn#GLANCE_DBPASS@osacm#g' /etc/glance/glance-api.conf
$ sed -i 's#GLANCE_DBPASS@osacn#GLANCE_DBPASS@osacm#g' /etc/glance/glance-registry.conf
$ systemctl start openstack-glance-api.service openstack-glance-registry.service
$ systemctl enable openstack-glance-api.service openstack-glance-registry.service
$ netstat -lntp # check that 9191 and 9292 are listening
Copy the images from the control node to the target server, then fix their owner and group on the target
$ scp /var/lib/glance/images/* osacm:/var/lib/glance/images/
$ chown -R glance:glance /var/lib/glance/images/*
Re-register the glance API endpoints in keystone, update every nova.conf that references glance, and restart the affected nova services
$ openstack endpoint create --region RegionOne image public http://osacm:9292
$ openstack endpoint create --region RegionOne image internal http://osacm:9292
$ openstack endpoint create --region RegionOne image admin http://osacm:9292
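One assumption worth making explicit: the old endpoints still pointing at osacn should be deleted, or clients may keep resolving the retired host; roughly:
$ openstack endpoint list | grep image    # note the IDs of the stale osacn entries
$ openstack endpoint delete <endpoint-id> # repeat for each stale entry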
$ sed -i "s#http://osacn:9292#http://osacm:9292#g" /etc/nova/nova.conf
$ systemctl restart openstack-nova-api.service
$ systemctl restart openstack-nova-compute.service
Verify
## you should be able to upload an image and create an instance!
(一)Main cinder components and their roles
- cinder-api: receives external API requests
- cinder-volume: provides the storage space
- cinder-scheduler: the scheduler; decides which cinder-volume will serve an allocation request
- cinder-backup: backs up volumes
(二)Installing cinder: the control node
Create the database and grant privileges
$ mysql -uroot -p1
$ CREATE DATABASE cinder;
$ GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
$ GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Create the service credentials
openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder \
--description "OpenStack Block Storage" volume
openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne \
volume public http://osacn:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volume internal http://osacn:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volume admin http://osacn:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 public http://osacn:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 internal http://osacn:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne \
volumev2 admin http://osacn:8776/v2/%\(tenant_id\)s
Install and configure
$ yum install openstack-cinder
$ cp /etc/cinder/cinder.conf{,.bak}
$ grep -Ev "^$|#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
$ vim /etc/cinder/cinder.conf
--------------------------------------------------
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@osacn/cinder
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = osacn
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
--------------------------------------------------
$ openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne
$ su -s /bin/sh -c "cinder-manage db sync" cinder
Start the services, install lvm2, and check
$ systemctl restart openstack-nova-api.service
$ systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
$ systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
$ yum install -y lvm2
$ systemctl start lvm2-lvmpolld.service
$ systemctl enable lvm2-lvmpolld.service
$ cinder service-list # the services should show up
(三)Installing cinder: the storage node
First add two disks
## Steps for adding the disks are omitted here
## To make the VM detect the new disks without rebooting:
$ echo "- - -" > /sys/class/scsi_host/host0/scan # if that is not enough, continue with:
$ echo "- - -" > /sys/class/scsi_host/host1/scan
$ echo "- - -" > /sys/class/scsi_host/host2/scan
...
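Before creating the volume groups, confirm the kernel actually sees the new disks:
$ lsblk   # /dev/sdb and /dev/sdc should now appear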
Install lvm2, configure it, and create the volume groups
$ yum install lvm2
$ systemctl enable lvm2-lvmetad.service
$ systemctl start lvm2-lvmetad.service
$ pvcreate /dev/sdb /dev/sdc
$ vgcreate cinder-ssd /dev/sdb
$ vgcreate cinder-sata /dev/sdc
$ vim /etc/lvm/lvm.conf
--------------------------------------------------
filter = [ "a/sdb/","a/sdc/","r/.*/" ] # around line 130 of lvm.conf
--------------------------------------------------
Install cinder and its related packages, edit the configuration file, and start the services.
$ yum install openstack-cinder targetcli python-keystone -y
$ cp /etc/cinder/cinder.conf{,.bak}
$ grep -Ev "^$|#" /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
$ vim /etc/cinder/cinder.conf
--------------------------------------------------
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.80.182
enabled_backends = ssd,sata # matches the [ssd] and [sata] sections defined below
glance_api_servers = http://osacm:9292
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEYMGR]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@osacn/cinder
[keystone_authtoken]
auth_uri = http://osacn:5000
auth_url = http://osacn:35357
memcached_servers = osacn:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
rabbit_host = osacn
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[ssd] # matches enabled_backends
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-ssd # the volume group created earlier
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = ssd # a label so volumes can later be targeted at this backend
[sata] # matches enabled_backends
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-sata # the volume group created earlier
iscsi_protocol = iscsi
iscsi_helper = lioadm
volume_backend_name = sata
--------------------------------------------------
$ systemctl enable openstack-cinder-volume.service target.service
$ systemctl start openstack-cinder-volume.service target.service
Verify
## on the control node
$ cinder service-list # three services should be up
## In the web UI you can now create volumes; on the storage node, lvs shows the logical volumes backing them.
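To aim a volume at a specific backend, the volume_backend_name labels above can be wired to cinder volume types; a sketch using that era's cinderclient commands:
$ cinder type-create ssd
$ cinder type-key ssd set volume_backend_name=ssd
$ cinder create --volume-type ssd --display-name vol-ssd 1   # a 1 GB volume placed on the [ssd] backend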
Add a NIC (it must not be in bridged mode!!)
Edit the configuration files
## Control node:
$ vim /etc/neutron/plugins/ml2/ml2_conf.ini
--------------------------------------------------
...
[ml2_type_flat]
flat_networks = provider,net172_16
...
--------------------------------------------------
$ vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
--------------------------------------------------
...
[linux_bridge]
physical_interface_mappings = provider:eno33554984,net172_16:brqe694a207-33
...
--------------------------------------------------
systemctl restart neutron-server.service neutron-linuxbridge-agent.service
## Compute node
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
--------------------------------------------------
...
[linux_bridge]
physical_interface_mappings = provider:eno33554984,net172_16:brqe694a207-33
...
--------------------------------------------------
$ systemctl restart neutron-linuxbridge-agent.service
Create the network
$ neutron net-create --shared --provider:physical_network net172_16 --provider:network_type flat codefunny
$ neutron subnet-create --name codebadny --allocation-pool start=172.16.0.200,end=172.16.0.250 --dns-nameserver 114.114.114.114 --gateway 172.16.0.2 codefunny 172.16.0.0/24
Install nfs-utils on the control node, edit /etc/exports, create the exported directory, and restart the NFS service
## /etc/exports should read as follows
/data 192.168.80.0/24(rw,async,no_root_squash,no_all_squash)
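A sketch of those steps as commands (the directory follows the export above; on CentOS 7 the NFS server unit is nfs-server):
$ yum install -y nfs-utils
$ mkdir -p /data
$ systemctl restart nfs-server && systemctl enable nfs-server
$ showmount -e 192.168.80.181   # the /data export should be listed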
On the storage node, configure cinder.conf, create the shares file, and restart the cinder service
$ vim /etc/cinder/cinder.conf
--------------------------------------------------
[DEFAULT]
...
enabled_backends = ssd,sata,nfs
...
[nfs]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
volume_backend_name = nfs
--------------------------------------------------
$ vim /etc/cinder/nfs_shares
--------------------------------------------------
192.168.80.181:/data
--------------------------------------------------
$ systemctl restart openstack-cinder-volume.service
Verify
$ cinder service-list
Install the packages and edit the configuration
$ yum install -y openstack-nova-compute
$ vim /etc/nova/nova.conf
--------------------------------------------------
...
[libvirt]
cpu_mode = none
virt_type = qemu
...
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://osacn:6080/vnc_auto.html
...
--------------------------------------------------
$ systemctl start libvirtd.service openstack-nova-compute.service
$ systemctl enable libvirtd.service openstack-nova-compute.service
$ nova service-list
Set up passwordless SSH for the nova user between the migration nodes
## On one host
$ usermod -s /bin/bash nova
$ su - nova
-bash-4.2$ cp /etc/skel/.bash* .
-bash-4.2$ logout
$ su - nova
[nova@osacm ~]$ ssh-keygen -t rsa -q -N ''
[nova@osacm ~]$ cd .ssh/
[nova@osacm .ssh]$ cp -fa id_rsa.pub authorized_keys
[nova@osacm ~]$ scp -rp .ssh nova@192.168.80.181:`pwd`
## On the other host
$ usermod -s /bin/bash nova
[root@osacn ~]# su - nova
-bash-4.2$ logout
$ chown -R nova:nova /var/lib/nova/.ssh/
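Verify the key exchange before attempting a migration (hostnames per this lab):
[nova@osacm ~]$ ssh nova@osacn hostname   # should print osacn without prompting for a password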
Edit the configuration files and restart the services
## Compute node
vi /etc/nova/nova.conf
--------------------------------------------------
...
[DEFAULT]
allow_resize_to_same_host = True
--------------------------------------------------
$ systemctl restart openstack-nova-compute.service
## Control node
vi /etc/nova/nova.conf
--------------------------------------------------
...
[DEFAULT]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
...
--------------------------------------------------
systemctl restart openstack-nova-scheduler.service
Finally, test the migration in the web UI.
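The same check can be done from the CLI, using the instance created earlier (the name comes from the example above):
$ nova migrate codefun001            # cold-migrate the instance to the other host
$ nova resize-confirm codefun001     # confirm once it reaches VERIFY_RESIZE
$ nova show codefun001 | grep host   # see which hypervisor hosts it now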