OpenStack Stein Installation Guide

Contents

1. Basic Environment Configuration
1.1 Node Hardware Plan
1.2 Node Network Plan
1.3 Disabling the Firewall
1.4 Configuring the yum Repositories
1.5 Configuring Node IP Addresses
1.6 Setting Hostnames
1.7 Configuring Hostname Resolution
1.8 Configuring the NTP Service
2. Installing Base Packages
2.1 Installing the OpenStack Packages
2.2 Installing the MariaDB Database
2.3 Installing the RabbitMQ Message Queue
2.4 Installing the Memcached Cache
2.5 Installing the Etcd Service
3. Installing OpenStack Services
3.1 Installing the Keystone Service
3.1.1 Identity Service Overview
3.1.2 Installing and Configuring Keystone
3.1.3 Creating Projects and Users
3.1.4 Verifying Identity Service Operation
3.1.5 Creating OpenStack Client Environment Scripts
3.2 Installing the Glance Service
3.2.1 Image Service Overview
3.2.2 Creating the glance Database
3.2.3 Installing and Configuring Components
3.2.4 Verifying Operation
3.3 Installing the Compute Service
3.3.1 Compute Service Overview
3.3.2 Installing and Configuring the Controller Node
3.3.2.2 Installing and Configuring Components
3.3.3 Installing and Configuring the Compute Node
3.3.4 Verifying Compute Service Operation
3.4 Installing the Neutron Service
3.4.1 Networking Service Overview
3.4.3 Installing and Configuring the controller Node
3.4.4 Installing and Configuring the compute Node
3.5 Installing the Horizon Service
3.5.1 Installing and Configuring Components
3.5.2 Completing the Installation and Starting the Service
3.5.3 Logging in to the Web UI to Verify
3.6 Installing the Cinder Service
3.6.1 Block Storage Service Overview
3.6.2 Installing and Configuring the cinder Node
3.6.3 Installing and Configuring the controller Node
3.6.4 Verifying the Cinder Configuration
4. Creating a VM Instance
4.1 Creating the External Network
4.1.1 Creating the provider External Network
4.1.2 Creating a Subnet in the Network
4.1.3 Checking NIC Changes on the Nodes
4.2 Creating a Tenant Network
4.2.1 Creating the self-service Network
4.2.2 Creating a Subnet in the Network
4.2.3 Creating a Router
4.2.4 Attaching the Tenant Network to the Router
4.2.5 Connecting the Router to the External Network
4.2.6 Verifying Operation
4.3 Creating a Flavor
4.4 Generating a Key Pair
4.5 Adding Security Group Rules
4.6 Confirming Instance Options
4.7 Launching an Instance
4.8 Accessing the Instance via the Virtual Console
4.9 Assigning a Floating IP to the Instance
4.10 Accessing the Instance over SSH
4.11 NIC Changes
4.12 Block Storage
4.12.1 Creating a Volume
4.12.2 Attaching the Volume to an Instance
5. Launching an Instance from an Official Cloud Image
5.1 Downloading an Official Generic Cloud Image
5.3 Launching an Instance
5.4 Assigning a Floating IP to the Instance
5.5 Accessing the Instance over SSH
6. Checking Current NIC State
6.1 Controller Node
6.2 Compute Node
6.3 Cinder Node


1. Basic Environment Configuration

1.1 Node Hardware Plan

This deployment uses VMware Workstation to create three CentOS 7 virtual machines as nodes: one controller node, one compute node, and one cinder block-storage node. The hardware configuration is as follows:

Node        | CPU    | RAM | Disk                       | OS image
controller  | 4 vCPU | 6GB | 50GB                       | CentOS-7-x86_64-Minimal-1804.iso
compute     | 4 vCPU | 4GB | 50GB                       | CentOS-7-x86_64-Minimal-1804.iso
cinder      | 2 vCPU | 2GB | 50GB system + 50GB storage | CentOS-7-x86_64-Minimal-1804.iso

Enable the virtualization engine in each VMware Workstation VM's settings: [figure]

Check the OS and kernel versions:

[root@controller ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
[root@controller ~]# uname -sr
Linux 4.16.11-1.el7.elrepo.x86_64

1.2 Node Network Plan

This deployment uses linuxbridge + vxlan networking with three network planes: the management network, the external network, and the tenant tunnel network. The plan is as follows:

Node        | NIC  | Name  | Mode      | vSwitch | Network    | IP address    | Gateway
controller  | NIC1 | ens33 | host-only | VMnet1  | management | 192.168.90.70 |
controller  | NIC2 | ens34 | host-only | VMnet2  | tunnel     | 192.168.91.70 |
controller  | NIC3 | ens35 | NAT       | VMnet8  | external   | 192.168.92.70 | 192.168.92.2
compute     | NIC1 | ens32 | host-only | VMnet1  | management | 192.168.90.71 |
compute     | NIC2 | ens33 | host-only | VMnet2  | tunnel     | 192.168.91.71 |
compute     | NIC3 | ens34 | NAT       | VMnet8  | deployment | 192.168.92.71 | 192.168.92.2
cinder      | NIC1 | ens33 | host-only | VMnet1  | management | 192.168.90.72 |
cinder      | NIC2 | ens35 | NAT       | VMnet8  | deployment | 192.168.92.72 | 192.168.92.2

VMware Workstation virtual network editor settings; a new VMnet2 is created here for the tunnel network. [figure]

Network plan notes:

  • The controller node has 3 NICs, the compute node 3 NICs, and the cinder node 2 NICs. Note in particular that the last NIC on the compute and cinder nodes is used only for Internet access to install the OpenStack packages; if you have a local yum repository, those two NICs are unnecessary, and they do not belong to any network in the OpenStack architecture.
  • The management network uses host-only mode. The official docs describe reaching the Internet over the management network to install packages, but with an internal yum repository the management network does not need Internet access, so host-only mode works fine.
  • The tunnel network uses host-only mode because it never needs Internet access; it only carries the internal network traffic of OpenStack tenants.
  • The external network uses NAT mode. On the controller node the external network mainly gives OpenStack tenant networks access to the outside world; OpenStack package installation also goes over this network.
  • Note in particular: on the compute and cinder nodes, the external (deployment) network is used only to install the OpenStack packages and serves no other purpose.

The three network planes:

  • Management network (management/API network):
    Provides system-management functions and is used for internal communication between service components across nodes and for access to the database service. All nodes must connect to the management network. Here the management network also carries API traffic; the API and management networks are merged, and OpenStack components expose their API services to users over it.
  • Tunnel network (tunnel or self-service network):
    Carries tenant virtual networks (VXLAN or GRE). When OpenStack uses GRE or VXLAN mode, a tunnel network is required. The tunnel network replaces switched connections with point-to-point tunnels; in OpenStack, this is the network that virtual machine data traffic travels over. It corresponds to "Networking Option 2: Self-service networks" in the official documentation.

  • External network (external or provider network):
    An OpenStack deployment needs at least one external network. This network can reach networks outside the OpenStack installation, and devices outside the OpenStack environment can reach addresses on it. The external network also provides floating IPs for virtual machines in the OpenStack environment, allowing the outside network to reach internal VM instances. It corresponds to "Networking Option 1: Provider networks" in the official documentation.

  • Note: no separate storage network plane is planned here; the cinder node carries storage traffic over the management network.

Network topology of this OpenStack deployment: [figure]
Overall network diagram of this environment: [figure]
Internal network diagram after installation: [figure]

1.3 Disabling the Firewall

1. Disable SELinux:

# sed -i 's/enforcing/disabled/g' /etc/selinux/config
# setenforce 0

2. Stop and disable firewalld:

# systemctl stop firewalld.service && systemctl disable firewalld.service
# firewall-cmd --state  # verify it is not running
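The sed invocation above rewrites every occurrence of "enforcing" to "disabled" in /etc/selinux/config. A quick way to preview the substitution on a sample line before touching the real file (note that a global s/enforcing/disabled/g also hits comment lines that mention "enforcing"; anchoring the pattern on ^SELINUX= is stricter):

```shell
# Preview the guide's sed substitution against a sample config line,
# without modifying any real file.
sample='SELINUX=enforcing'
result=$(printf '%s\n' "$sample" | sed 's/enforcing/disabled/g')
echo "$result"   # SELINUX=disabled
```

The stricter equivalent for the real file would be `sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config`.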

1.4 Configuring the yum Repositories

Perform the following on all nodes.

Configure the Aliyun (China-local) yum mirror for faster downloads.

1. Back up the official CentOS repo file:

# cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

2. Download the Aliyun repo file (from Aliyun's standard mirror URL) and rebuild the cache:

# yum -y install net-tools vim wget
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# yum clean all && yum makecache

1.5 Configuring Node IP Addresses

1. Controller node network configuration:

[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
#UUID=c5b6a4aa-c495-47c5-8e9d-f0fba43a6a89
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.90.70
PREFIX=24

[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
#UUID=adfbb0cd-97ad-49a1-876b-75f9669ec512
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.91.70
PREFIX=24

[root@controller ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens35
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens35
#UUID=7f290d18-1276-4696-8727-f828ea44bdce
DEVICE=ens35
ONBOOT=yes
IPADDR=192.168.92.70
PREFIX=24
GATEWAY=192.168.92.2
DNS1=114.114.114.114
DNS2=8.8.8.8

[root@controller ~]# ip ad
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f2:f9:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.90.70/24 brd 192.168.90.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::fc0d:675:2ad6:f897/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens34:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f2:f9:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.91.70/24 brd 192.168.91.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::1ddb:eab2:f3d4:2273/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: ens35:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f2:f9:5b brd ff:ff:ff:ff:ff:ff
    inet 192.168.92.70/24 brd 192.168.92.255 scope global noprefixroute ens35
       valid_lft forever preferred_lft forever
    inet6 fe80::a786:42fa:f068:a716/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

2. Compute node network configuration:

[root@computer ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens32
#UUID=751a09ed-2f6b-4305-acea-4757a0cca0bb
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.90.71
PREFIX=24
[root@computer ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
#UUID=2aa73c02-ff03-455c-b7ae-558f73b23b40
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.91.71
PREFIX=24

[root@computer ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
#UUID=646fb487-521a-4697-b6b0-28feb60113aa
DEVICE=ens34
ONBOOT=yes
IPADDR=192.168.92.71
PREFIX=24
GATEWAY=192.168.92.2
DNS1=114.114.114.114
DNS2=8.8.8.8

[root@computer ~]# ip add
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens32:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4f:55:b3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.90.71/24 brd 192.168.90.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::64b7:7c1d:5771:f22a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4f:55:bd brd ff:ff:ff:ff:ff:ff
    inet 192.168.91.71/24 brd 192.168.91.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::d7ce:c82b:4a3e:61ba/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: ens34:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4f:55:c7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.92.71/24 brd 192.168.92.255 scope global noprefixroute ens34
       valid_lft forever preferred_lft forever
    inet6 fe80::6c69:4429:bec2:ad41/64 scope link noprefixroute
       valid_lft forever preferred_lft forever



3. Cinder node network configuration:

[root@cinder ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
#UUID=40e1b794-fa1b-48c4-bf85-25ced5c1578e
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.90.72
PREFIX=24
[root@cinder ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens34
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens34
UUID=0a1f319c-08d6-49d1-addb-c3a6b07d5ee2
DEVICE=ens34
ONBOOT=no
[root@cinder ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens35
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens35
#UUID=2b8d636a-bbef-4389-8366-ba0c8163da43
DEVICE=ens35
ONBOOT=yes
IPADDR=192.168.92.72
PREFIX=24
GATEWAY=192.168.92.2
DNS1=114.114.114.114
DNS2=8.8.8.8
[root@cinder ~]# ip add
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:55:28:a5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.90.72/24 brd 192.168.90.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::c155:716:1592:c9c3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: ens35:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:55:28:b9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.92.72/24 brd 192.168.92.255 scope global noprefixroute ens35
       valid_lft forever preferred_lft forever
    inet6 fe80::1a5c:9306:6e3f:89b2/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

4. Verify network connectivity:

[root@controller ~]# ping qq.com  # controller node reaches the Internet
[root@compute ~]# ping qq.com     # compute node reaches the Internet
[root@cinder ~]# ping qq.com      # cinder node reaches the Internet
[root@controller ~]# ping  192.168.90.71  # reach the compute node's management network
[root@controller ~]# ping  192.168.91.71  # reach the compute node's tunnel network
[root@controller ~]# ping  192.168.92.71  # reach the compute node's external network

1.6 Setting Hostnames

On the controller node:

[root@localhost ~]# hostnamectl set-hostname controller

On the compute node:

[root@localhost ~]# hostnamectl set-hostname compute

On the cinder node:

[root@localhost ~]# hostnamectl set-hostname cinder

1.7 Configuring Hostname Resolution

Run on all nodes with identical content. Note that the management-network IP addresses are used here:

[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.90.70 controller
192.168.90.71 compute
192.168.90.72 cinder

2. Verify that hostname resolution works:

Ping the compute and cinder nodes:

[root@controller ~]# ping compute
PING compute (192.168.90.71) 56(84) bytes of data.
64 bytes from compute (192.168.90.71): icmp_seq=1 ttl=64 time=0.853 ms
[root@controller ~]# ping cinder
PING cinder (192.168.90.72) 56(84) bytes of data.
64 bytes from cinder (192.168.90.72): icmp_seq=1 ttl=64 time=0.688 ms
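With three nodes it is easy for a hosts entry to drift out of sync with the name set via hostnamectl. A small sketch that looks up each node name against a copy of the expected entries (the hosts_entries variable inlines the name-to-management-IP mapping assumed by this guide):

```shell
# Expected management-network hosts entries for this deployment.
hosts_entries='192.168.90.70 controller
192.168.90.71 compute
192.168.90.72 cinder'

# Return the IP mapped to a given node name, mimicking a hosts lookup.
lookup() { printf '%s\n' "$hosts_entries" | awk -v h="$1" '$2 == h {print $1}'; }

echo "$(lookup controller)"   # 192.168.90.70
```

The same awk one-liner can be pointed at the real /etc/hosts on each node to confirm all three entries agree.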

1.8 Configuring the NTP Service

Perform the following configuration on the controller node.

1. Install the package:

[root@controller ~]# yum install chrony

2. Edit the configuration file:

[root@controller ~]# vim /etc/chrony.conf
allow 192.168.0.0/16  # uncomment to allow the other nodes' subnets to sync time; adjust to your actual subnets

3. Restart the service and enable it at boot:

[root@controller ~]# systemctl enable chronyd.service && systemctl start chronyd.service

4. Check the time synchronization status.
The row whose MS column contains ^* is the NTP server chrony is currently synchronized to:

[root@controller ~]# chronyc sources

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- tock.ntp.infomaniak.ch        1   8   377   661   +130us[ -104us] +/-  104ms
^- ntp5.flashdance.cx            2   8   317   429  +1054us[+1073us] +/-  159ms
^- ntp1.flashdance.cx            2   8    57   127  +2422us[+2437us] +/-  144ms
^* 111.230.189.174               2   9   377    78   +170us[ +184us] +/-   35ms
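When scripting health checks, the currently selected server can be pulled out of `chronyc sources` output by matching the ^* marker in the MS column. A sketch run here against an inline copy of the sample output shown above:

```shell
# Two lines copied from the `chronyc sources` sample output above.
sources_output='^- tock.ntp.infomaniak.ch        1   8   377   661   +130us[ -104us] +/-  104ms
^* 111.230.189.174               2   9   377    78   +170us[ +184us] +/-   35ms'

# Print the Name/IP field of the row whose MS column is ^* (the selected source).
selected=$(printf '%s\n' "$sources_output" | awk '$1 == "^*" {print $2}')
echo "$selected"   # 111.230.189.174
```

On a live node, replace the inline variable with `chronyc sources | awk '$1 == "^*" {print $2}'`.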

5. Check whether the current time is accurate; NTP synchronized: yes indicates synchronization has succeeded. (In the sample output below it is still no, meaning the first synchronization has not yet completed; give chrony a moment and check again.)

[root@controller ~]# timedatectl
      Local time: Sat 2020-11-28 18:13:46 CST
  Universal time: Sat 2020-11-28 10:13:46 UTC
        RTC time: Sat 2020-11-28 11:22:19
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

Perform the following configuration on the compute node:

1. Install the package:

[root@compute ~]# yum install chrony -y

2. Edit the configuration file so the compute node synchronizes time from the controller node (using the controller's management-network address, consistent with the network plan):

[root@compute ~]# vim /etc/chrony.conf   # comment out lines 3-6 and add line 7
  1 # Use public servers from the pool.ntp.org project.
  2 # Please consider joining the pool (http://www.pool.ntp.org/join.html).
  3 #server 0.centos.pool.ntp.org iburst
  4 #server 1.centos.pool.ntp.org iburst
  5 #server 2.centos.pool.ntp.org iburst
  6 #server 3.centos.pool.ntp.org iburst
  7 server 192.168.90.70 iburst

3. Restart the service and enable it at boot:

[root@compute ~]# systemctl enable chronyd.service && systemctl start chronyd.service

The cinder node is configured the same way as the compute node, so the steps are omitted here.

Other useful NTP commands:

# timedatectl set-ntp yes                 # enable NTP synchronization
# timedatectl set-timezone Asia/Shanghai  # set the time zone
# yum install -y ntpdate                  # install the one-shot sync tool
# ntpdate 0.centos.pool.ntp.org           # force a sync with a public NTP server
# ntpdate 192.168.90.70                   # force a sync with the controller node

Note: if node clocks drift apart, all kinds of problems can appear later; make sure time is properly synchronized before proceeding.

2. Installing Base Packages

2.1 Installing the OpenStack Packages

Perform the following on all nodes.

1. Enable the OpenStack repository by installing the Stein release repository package:

yum install centos-release-openstack-stein -y

2. Upgrade all packages; if the upgrade installs a new kernel, reboot the node to activate it:

yum upgrade

3. Install the OpenStack client:

# yum install python-openstackclient

4. Install the openstack-selinux package to automatically manage security policies for OpenStack services:

# yum install openstack-selinux

2.2 Installing the MariaDB Database

Perform the following on the controller node.

Most OpenStack services use a SQL database to store information, and the database typically runs on the controller node. This deployment uses MariaDB; OpenStack also supports other SQL databases, including PostgreSQL.

1. Install the packages:

# yum install mariadb mariadb-server python2-PyMySQL

2. Create and edit the /etc/my.cnf.d/openstack.cnf file:

Add the following content. Set bind-address to the controller node's management IP so that other nodes can reach the database over the management network:

[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.90.70
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

3. Start the database service and enable it at boot:

# systemctl start mariadb.service && systemctl enable mariadb.service

4. Run the mysql_secure_installation script to secure the database service and set a password for the root account (123456 here):

[root@controller ~]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

2.3 Installing the RabbitMQ Message Queue

Perform the following on the controller node.

OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ.

1. Install the package:

# yum install rabbitmq-server

2. Start the message queue service and enable it at boot:

# systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service

3. Add the openstack user and set its password (123456 here):

# rabbitmqctl add_user openstack 123456

4. Grant the openstack user configure, write, and read permissions (the three ".*" patterns are the configure, write, and read permission regexes, in that order):

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

5. Enable the management web plugin and list the plugins:

# rabbitmq-plugins enable rabbitmq_management
# rabbitmq-plugins list

6. Access the web UI on port 15672; the default username and password are both guest.

 

2.4 Installing the Memcached Cache

Perform the following on the controller node.

The Identity service uses Memcached to cache tokens; the memcached service typically runs on the controller node.

1. Install the packages:

# yum install memcached python-memcached

2. Edit the /etc/sysconfig/memcached file:
Configure the service with the controller node's management IP so that other nodes can reach it over the management network:

[root@controller ~]# vim  /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.90.70,::1"    # change 127.0.0.1 to 192.168.90.70

3. Start the Memcached service and enable it at boot:

# systemctl enable memcached.service && systemctl start memcached.service

 

2.5 Installing the Etcd Service

Perform the following on the controller node.

OpenStack services may use Etcd, a reliable distributed key-value store, for distributed key locking, configuration storage, service liveness tracking, and other scenarios.

1. Install the package:

[root@controller ~]# yum install etcd

2. Edit the /etc/etcd/etcd.conf file and set the options below to the controller node's management IP so that other nodes can reach it over the management network:

[root@controller ~]# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/etcd/etcd.conf
[root@controller ~]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.90.70:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.90.70:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.90.70:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.90.70:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.90.70:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

3. Start the etcd service and enable it at boot:

[root@controller ~]# systemctl enable etcd && systemctl start etcd
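Every listen/advertise URL in the etcd.conf above should point at the controller's management IP. A quick sanity check against an inline copy of the config; on a real node, replace the variable with the actual file (adjust the address if your plan differs):

```shell
# Inline copy of the URL-bearing lines from /etc/etcd/etcd.conf above.
etcd_conf='ETCD_LISTEN_PEER_URLS="http://192.168.90.70:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.90.70:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.90.70:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.90.70:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.90.70:2380"'

# Extract each URL's scheme+host (the port is cut off at the colon) and
# deduplicate; a correct config yields exactly one distinct address.
hosts=$(printf '%s\n' "$etcd_conf" | grep -o 'http://[0-9.]*' | sort -u)
echo "$hosts"   # http://192.168.90.70
```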

3. Installing OpenStack Services

1. A minimal Stein deployment requires at least the following services, installed in the order listed:

  • Identity service – keystone installation for Stein

  • Image service – glance installation for Stein

  • Compute service – nova installation for Stein
  • Networking service – neutron installation for Stein

2. After the minimal deployment, we also recommend installing the following components:

  • Dashboard – horizon installation for Stein
  • Block Storage service – cinder installation for Stein

3.1 Installing the Keystone Service

Perform the following on the controller node.

This section describes how to install and configure the OpenStack Identity service, known as keystone, on the controller node. For scalability, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.

3.1.1 Identity Service Overview

The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and the service catalog.

The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services. Likewise, other OpenStack services use the Identity service to verify that users are who they claim to be and to discover where the other services in the deployment are. The Identity service can also integrate with external user-management systems such as LDAP.

Users and services locate other services using the service catalog managed by the Identity service. As the name implies, the service catalog is a collection of the services available in an OpenStack deployment. Each service can have one or more endpoints, and each endpoint is one of three types: admin, internal, or public. In a production environment, the endpoint types may, for security reasons, live on separate networks exposed to different classes of users. For example, the public API network might be reachable from the Internet so customers can manage their clouds; the admin API network might be restricted to operators within the organization that manages the cloud infrastructure; and the internal API network might be restricted to the hosts that run OpenStack services. OpenStack also supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint types and the default RegionOne region. The regions, services, and endpoints created in the Identity service together form the deployment's service catalog. Each OpenStack service in the deployment needs a service entry with corresponding endpoints stored in the Identity service; all of this can be done after the Identity service has been installed and configured.

The Identity service contains these components:
Server
A central server provides authentication and authorization services via a RESTful interface.
Drivers
Drivers, or service back ends, are integrated into the central server. They are used to access identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, an SQL database or an LDAP server).
Modules
Middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract user credentials, and send them to the central server for authorization. The integration between the middleware modules and the OpenStack components uses the Python Web Server Gateway Interface (WSGI).

3.1.2 Installing and Configuring Keystone

Create the keystone database

1. Connect to the database server as root:

$ mysql -uroot -p

2. Create the keystone database:

MariaDB [(none)]> CREATE DATABASE keystone;

3. Grant proper access to the keystone database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY '123456';

Note: the password here is set to 123456.
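The same two-statement GRANT pattern recurs for every OpenStack service database added later (glance, nova, neutron, cinder, and so on). A small helper that generates the statements for a given database name and password; the helper name db_grants and the convention that the database user matches the database name are assumptions of this sketch, not part of the official procedure:

```shell
# Print the localhost and '%' GRANT statements for a service database,
# following the keystone pattern used above.
db_grants() {
  db=$1; pass=$2
  printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'localhost' IDENTIFIED BY '%s';\n" "$db" "$db" "$pass"
  printf "GRANT ALL PRIVILEGES ON %s.* TO '%s'@'%%' IDENTIFIED BY '%s';\n" "$db" "$db" "$pass"
}

db_grants keystone 123456
```

The output can be piped into `mysql -uroot -p` when creating the next service database.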

Install and configure the keystone components

1. Install the packages:

# yum install openstack-keystone httpd mod_wsgi

2. Edit the /etc/keystone/keystone.conf configuration file:

[root@controller ~]# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/keystone/keystone.conf
[root@controller ~]# vim /etc/keystone/keystone.conf

In the [database] section, configure database access:

[database]
# ...
connection = mysql+pymysql://keystone:123456@controller/keystone

Note: the password here is 123456.

In the [token] section, configure the Fernet token provider:

[token]
# ...
provider = fernet

3. Populate the Identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

4. Initialize the Fernet key repositories:

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

5. Bootstrap the Identity service:

# keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Replace ADMIN_PASS with a suitable password for the admin user; here it is 123456.

Configure the Apache HTTP server

1. Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

[root@controller ~]# vim /etc/httpd/conf/httpd.conf
ServerName controller

2. Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

3. Start the Apache HTTP service and enable it at boot:

[root@controller ~]# systemctl enable httpd.service && systemctl start httpd.service

4. Configure the administrative account by setting the following environment variables:

export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

Note: the admin password here is 123456; it must match the password used in the keystone-manage bootstrap command.
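These export lines persist only in the current shell session; the same pattern backs the OpenRC files created in section 3.1.5. A minimal, self-contained demonstration of writing such a file and loading it with `.` (the temporary file and the two variables here are just for illustration):

```shell
# Write a tiny rc file with two of the exports used above...
rcfile=$(mktemp)
cat > "$rcfile" <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:5000/v3
EOF

# ...then load it into the current shell and confirm the variables are set.
. "$rcfile"
echo "$OS_USERNAME@$OS_AUTH_URL"   # admin@http://controller:5000/v3

rm -f "$rcfile"
```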

3.1.3 Creating Projects and Users

The Identity service provides authentication for every OpenStack service, using a combination of domains, projects, users, and roles.

1. Create the example domain. Although a "default" domain already exists from the keystone-manage bootstrap step earlier in this guide, the formal way to create a new domain is:

[root@controller ~]# openstack domain create --description "An Example Domain" example          
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | An Example Domain                |
| enabled     | True                             |
| id          | 64a5ff4a539f43208f5e715f8ebbed1e |
| name        | example                          |
| tags        | []                               |
+-------------+----------------------------------+

2. Create the service project. This guide uses a service project that contains a unique user for each service added to the environment. Create the service project:

[root@controller ~]# openstack project create --domain default \
  --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 3c0dab6b9e714efd9c874a938a050cc3 |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

3. Create the myproject project.

Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the myproject project and the myuser user.

Result:

[root@controller ~]# openstack project create --domain default \
  --description "Demo Project" myproject

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 231ad6e7ebba47d6a1e57e1cc07ae446 |
| is_domain   | False                            |
| name        | myproject                        |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

4. Create the myuser user (password 123456):

[root@controller ~]# openstack user create --domain default \
  --password-prompt myuser

User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | aeda23aa78f44e859900e22c24817832 |
| name                | myuser                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

5. Create the myrole role:

[root@controller ~]# openstack role create myrole

+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | None                             |
| id        | 997ce8d05fc143ac97d83fdfb5998552 |
| name      | myrole                           |
+-----------+----------------------------------+

6. Add myrole to the myproject project and myuser user:

$ openstack role add --project myproject --user myuser myrole

Note: this command produces no output. You can repeat this procedure to create additional projects and users.

3.1.4 Verifying Identity Service Operation

1. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

$ unset OS_AUTH_URL OS_PASSWORD

2. As the admin user, request an authentication token.

Note: this command uses the admin user's password (123456).

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-11-28T14:43:47+0000                                                                                                                                                                |
| id         | gAAAAABfwlQTVtThjJl890QsgmCMEOm6xAmeUO699UcZ3N0O66J6_TmiKNAUOuw62zWUq95zqcEaBMkVpKTYw6MNU0bsNGT0HaNvXdijMQUPmlgKBEk25sOXCVZ83rpKZmay4l32B07AxucOOTLZpJiTgSfKMLELdw8eVq8M2BhYfuE-yc_cXfo |
| project_id | 8c17b9e0e7ff4393b4b1a883ca8efffe                                                                                                                                                        |
| user_id    | be50a96242df4ad79138747bcc654f08                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

3. As the myuser user created in the previous section, request an authentication token:

[root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
Password:
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-11-28T14:46:12+0000                                                                                                                                                                |
| id         | gAAAAABfwlSk_3DzFyzdlTxlYql8PT1fVrWs7AmBtAWLMRysKPCuVB-9o9bG9Ay5F6Q-bEIzaKCsP8m1U0xRdZCcC2LMZ-t8ffX5YrBjjlwtZXrda5NTFfFb9caG05k9bJ1E8W5_znuKJlVJwyTrRKlVQWge7LEVWH_brXBjBP8Mi87IgEohYH8 |
| project_id | 6ec6bbfd97fd4fc0a02df4837216b432                                                                                                                                                        |
| user_id    | 90786d01190b4cb180fde058f8426fb3                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

# Note: this command uses the demo user's password and API port 5000, which allows only regular (non-admin) access to the Identity service API. Both users now use port 5000.

3.1.5 Creating OpenStack Client Environment Scripts

The previous sections used a combination of environment variables and command options to interact with the Identity service via the openstack client. To make client operations more efficient, OpenStack supports simple client environment scripts, also known as OpenRC files. These scripts typically contain options common to all clients, but also support unique options.

Create the scripts
Create client environment scripts for the admin and demo projects and users. Later sections of this guide reference these scripts to load the appropriate credentials for client operations.
1. Create and edit the admin-openrc file and add the following content:

[root@controller ~]# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Note: the admin password here is 123456; replace it with the password you chose for the admin user in the Identity service.

2. Create and edit the demo-openrc file and add the following content:

[root@controller ~]# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Note: the myuser password here is 123456.

Replace it with the password you chose for the demo user in the Identity service.

When done it looks like this; I created the files under /root:

[root@controller ~]# ll
-rw-r--r--. 1 root root    261 Nov 28 21:56 admin-openrc
-rw-r--r--. 1 root root    266 Nov 28 21:56 demo-openrc

Use the scripts
To run clients as a specific project and user, simply load the matching client environment script before running them. For example:

1. Load the admin-openrc file to populate the environment variables with the Identity service location and the admin project and user credentials:

[root@controller ~]# . admin-openrc

2.请求身份验证令牌:

[root@controller ~]# openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-11-28T15:00:44+0000                                                                                                                                                                |
| id         | gAAAAABfwlgMgkfeeGfw5RqE4r0EGWgvofHdwLpfPt59b-B9xVM-priYr2RG8jheyqaLL8Yzf6zzgPt_UXWY_fmTcPMXFHEYfb5GlsVGednRxl7jSiKtnC61fiovIVMM1tT5bULhsr5YyyHyoXJlmQHJZ8DBkrnscMpE4Ti5I8lvbCO2UNDxiCY |
| project_id | 8c17b9e0e7ff4393b4b1a883ca8efffe                                                                                                                                                        |
| user_id    | be50a96242df4ad79138747bcc654f08                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
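本节的两个 openrc 文件只有项目名、用户名和密码不同,其余 OS_* 变量完全一致,可以用一个小脚本按参数批量生成(以下为示意脚本,write_openrc 是为演示而假设的函数名,并非官方工具):

```shell
# 示意:按项目/用户参数生成 openrc 文件(write_openrc 为演示用的假设函数名)
write_openrc() {
  local file=$1 project=$2 user=$3 password=$4
  cat > "$file" <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=$project
export OS_USERNAME=$user
export OS_PASSWORD=$password
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}

write_openrc demo-openrc myproject myuser 123456
. ./demo-openrc
echo "$OS_PROJECT_NAME/$OS_USERNAME"   # 输出: myproject/myuser
```

生成后仍按本文方式 `. demo-openrc` 加载即可。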

3.2 安装Glance服务

以下操作在控制节点执行

本节介绍如何在控制节点上安装和配置镜像服务,即glance。 为了简单起见,该配置将镜像存储在本地文件系统上。

3.2.1 镜像服务概述

 镜像服务(glance)使用户能够发现,注册和检索虚拟机镜像。 它提供了一个REST API,使您可以查询虚拟机镜像元数据并检索实际镜像。 您可以将通过镜像服务提供的虚拟机映像存储在各种位置,从简单的文件系统到对象存储系统(如OpenStack对象存储)。
为了简单起见,本指南将镜像服务配置为使用文件后端,上传的镜像存储在运行镜像服务的控制节点的本地目录中。默认情况下,该目录是 /var/lib/glance/images/。
OpenStack Image服务是基础架构即服务(IaaS)的核心。 它接受磁盘或服务器映像的API请求,以及来自最终用户或OpenStack Compute组件的元数据定义。 它还支持在各种存储库类型(包括OpenStack对象存储)上存储磁盘或服务器映像。
OpenStack镜像服务包括以下组件:

glance-api
接受镜像API调用以进行镜像发现,检索和存储。

glance-registry
存储,处理和检索有关镜像的元数据。 元数据包括例如大小和类型等项目。

Database
存储镜像元数据,您可以根据自己的喜好选择数据库。 大多数部署使用MySQL或SQLite。

Storage repository for image files(镜像文件的存储库)
支持各种存储库类型,包括常规文件系统(或安装在glance-api控制节点上的任何文件系统),Object Storage,RADOS块设备,VMware数据存储和HTTP。 请注意,某些存储库仅支持只读用法。

Metadata definition service(元数据定义服务)
为供应商、管理员、服务和用户提供一个通用 API,用来定义有实际意义的自定义元数据。此元数据可用于不同类型的资源,如镜像、制品(artifact)、卷、实例类型(flavor)和聚合。定义包括新属性的键、描述、约束以及可以关联的资源类型。

3.2.2 创建glance数据库

1.创建glance数据库
以root用户连接到数据库:

# mysql -u root -p

创建glance数据库:

MariaDB [(none)]> CREATE DATABASE glance;

授予对glance数据库的正确访问权限:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY '123456';

2.获取admin用户的环境变量

$ . admin-openrc

3.要创建服务凭据,请完成以下步骤
创建glance用户:

[root@controller ~]#  openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 5480c338e19f42e59b35c7965d950fa6 |
| name                | glance                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

把admin角色添加到glance用户和项目中

$ openstack role add --project service --user glance admin

说明:此条命令执行不返回信息
创建glance服务实体:
[root@controller ~]# openstack service create --name glance \
  --description "OpenStack Image" image

执行结果:

[root@controller ~]# openstack service create --name glance \
>   --description "OpenStack Image" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| enabled     | True                             |
| id          | f776846b998a41859d7f93cac56cf4c0 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

4.创建镜像服务API端点
[root@controller ~]# openstack endpoint create --region RegionOne \
image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne \
image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne \
image admin http://controller:9292

执行结果:

[root@controller ~]# openstack endpoint create --region RegionOne \
>   image public http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 253e29a9b76648b1919d374eb10e152f |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f776846b998a41859d7f93cac56cf4c0 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   image internal http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a4e5834004af45279b20008f16bcb4b9 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f776846b998a41859d7f93cac56cf4c0 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   image admin http://controller:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 46707abe74924e5096b150b70056247c |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f776846b998a41859d7f93cac56cf4c0 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://controller:9292           |
+--------------+----------------------------------+
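public、internal、admin 三个端点除 interface 名称外参数完全相同,可以用循环创建。下面的示意脚本中 run 函数只打印将要执行的命令(便于先行核对),真实环境中把 run 的定义改成直接执行即可:

```shell
# 示意:循环创建镜像服务的三个API端点
# 演示时 run 仅打印命令;实际部署时可改为: run() { "$@"; }
run() { echo "+ $*"; }

for iface in public internal admin; do
  run openstack endpoint create --region RegionOne \
      image "$iface" http://controller:9292
done
```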

3.2.3 安装和配置组件

1.安装软件包:

# yum install openstack-glance

2.编辑/etc/glance/glance-api.conf文件,完成以下操作

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/glance/glance-api.conf
# vim /etc/glance/glance-api.conf

在[database] 部分,配置数据库访问:

[database]
#..
connection = mysql+pymysql://glance:123456@controller/glance

在[keystone_authtoken] and [paste_deploy]部分,配置认证服务访问:

[keystone_authtoken]

www_authenticate_uri  = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456


[paste_deploy]

flavor = keystone

在 [glance_store]部分, 配置本地文件系统存储和映像文件的位置:

[glance_store]

stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
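像上面这样手工编辑 ini 文件容易遗漏,也可以用脚本写入。下面是一个极简的 ini 追加函数示意(仅演示思路,不处理已存在键的覆盖;生产环境建议使用 openstack-utils 软件包提供的 crudini 工具):

```shell
# 示意:极简的 ini 写入函数 —— 在指定节下插入 key = value,节不存在则追加新节
# 注意:不覆盖已存在的同名键,仅作演示
ini_set() {
  local file=$1 section=$2 key=$3 value=$4
  if ! grep -q "^\[$section\]" "$file" 2>/dev/null; then
    printf '[%s]\n%s = %s\n' "$section" "$key" "$value" >> "$file"
  else
    awk -v s="[$section]" -v kv="$key = $value" '
      { print }
      $0 == s { print kv }   # 紧跟节头之后插入
    ' "$file" > "$file.tmp" && mv "$file.tmp" "$file"
  fi
}

ini_set glance-api.conf database connection "mysql+pymysql://glance:123456@controller/glance"
ini_set glance-api.conf glance_store default_store file
ini_set glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
```

实际使用时把文件名换成 /etc/glance/glance-api.conf,并先按文中的 sed 命令做好备份。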

3.编辑/etc/glance/glance-registry.conf配置文件

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/glance/glance-registry.conf
# vim /etc/glance/glance-registry.conf

在[database]部分, 配置数据库访问:

[database]

connection = mysql+pymysql://glance:123456@controller/glance

在[keystone_authtoken] 和 [paste_deploy] 部分, 配置认证服务访问:

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[paste_deploy]

flavor = keystone

4.同步镜像服务数据库

# su -s /bin/sh -c "glance-manage db_sync" glance

5.完成安装,启动镜像服务并设为开机启动:

# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service

3.2.4 验证操作

使用CirrOS验证Image服务的操作,这是一个小型Linux映像,可帮助您测试OpenStack部署。
有关如何下载和构建映像的更多信息,请参阅OpenStack虚拟机映像指南https://docs.openstack.org/image-guide/。有关如何管理映像的信息,请参阅OpenStack最终用户指南https://docs.openstack.org/stein/user/

1.获取admin用户的环境变量

# . admin-openrc

2.下载镜像

# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

3.将镜像上传到image服务,指定磁盘格式为QCOW2,指定裸容器格式和公开可见性,以便所有项目都可以访问它:
[root@controller ~]# openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

执行结果:

[root@controller ~]# openstack image create "cirros" \
>   --file cirros-0.4.0-x86_64-disk.img \
>   --disk-format qcow2 --container-format bare \
>   --public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2020-11-28T17:30:40Z                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                      |
| file             | /v2/images/e2fc9a86-f7e3-4598-848e-2d79cb060cc2/file                                                                                                                                       |
| id               | e2fc9a86-f7e3-4598-848e-2d79cb060cc2                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                          |
| name             | cirros                                                                                                                                                                                     |
| owner            | 8c17b9e0e7ff4393b4b1a883ca8efffe                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e2161b5b5186106570c17a9e58b64dd39390617cd5a350f78', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 12716032                                                                                                                                                                                   |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2020-11-28T17:30:41Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
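输出中的 checksum 与 os_hash_value 字段分别是镜像文件内容的 MD5 与 SHA-512 摘要(对应 os_hash_algo='sha512')。上传前可以先在本地计算,上传后与 glance 返回的字段核对(示意脚本):

```shell
# 示意:计算镜像文件的 MD5(对应 checksum)与 SHA-512(对应 os_hash_value)
image_digests() {
  local file=$1
  echo "checksum:      $(md5sum "$file" | awk '{print $1}')"
  echo "os_hash_value: $(sha512sum "$file" | awk '{print $1}')"
}

# 实际使用: image_digests cirros-0.4.0-x86_64-disk.img
```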

4.查看上传的镜像,镜像状态应为active状态

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| e2fc9a86-f7e3-4598-848e-2d79cb060cc2 | cirros | active |
+--------------------------------------+--------+--------+

glance具体配置选项可参考:
https://docs.openstack.org/glance/stein/configuration/index.html

3.3 安装compute服务

本节介绍如何在控制节点上安装和配置计算服务,代号为nova

3.3.1 计算服务概述

 使用OpenStack Compute来托管和管理云计算系统。OpenStack Compute是基础架构即服务(IaaS)系统的重要组成部分。主要模块是用Python实现的。
OpenStack Compute 与 OpenStack Identity 交互以进行身份验证;与 OpenStack 镜像服务交互以获取磁盘和服务器镜像;与 OpenStack Dashboard 交互以提供用户和管理界面。镜像访问受项目和用户的限制;每个项目都有配额限制(例如实例数量)。OpenStack Compute 可以在标准硬件上水平扩展,并在启动实例时下载镜像。
OpenStack Compute包含以下内容及组件:
nova-api service
接受并响应最终用户计算API调用。该服务支持OpenStack Compute API。它执行一些策略并启动大多数编排活动,例如运行实例。
nova-api-metadata service
接受来自实例的元数据请求。该服务通常在以多主机模式运行 nova-network 的部署中使用。有关详细信息,请参阅计算管理员指南中的元数据服务。
nova-compute service
通过管理程序API创建和终止虚拟机实例的工作守护程序。例如:

  • XenAPI for XenServer/XCP
  • libvirt for KVM or QEMU
  • VMwareAPI for VMware

处理相当复杂。基本上,守护进程接受来自队列的动作并执行一系列系统命令,例如启动KVM实例并更新其在数据库中的状态。
nova-placement-api service
跟踪每个提供者的库存和使用情况。有关详情,请参阅 Placement API。
nova-scheduler service
从队列中获取虚拟机实例请求,并确定它在哪个计算服务器主机上运行。
nova-conductor module
调解nova-compute服务和数据库之间的交互,避免nova-compute服务直接访问云数据库。nova-conductor模块可以水平扩展,但请勿将其部署到运行nova-compute服务的节点上。有关更多信息,请参阅配置选项中的conductor部分。
nova-consoleauth daemon(守护进程)
为控制台代理提供的用户授权令牌。见 nova-novncproxy和nova-xvpvncproxy。此服务必须运行以使控制台代理正常工作。您可以在群集配置中针对单个nova-consoleauth服务运行任一类型的代理。有关信息,请参阅关于nova-consoleauth。
nova-novncproxy daemon
提供通过VNC连接访问正在运行的实例的代理。支持基于浏览器的novnc客户端。
nova-spicehtml5proxy daemon
提供通过SPICE连接访问正在运行的实例的代理。支持基于浏览器的HTML5客户端。
nova-xvpvncproxy daemon
提供通过VNC连接访问正在运行的实例的代理。支持OpenStack特定的Java客户端。
The queue队列
守护进程之间传递消息的中心集线器。通常用RabbitMQ实现 ,也可以用另一个AMQP消息队列实现,例如ZeroMQ。
SQL database
存储云基础架构的大部分构建时间和运行时状态,其中包括:

  • Available instance types 可用的实例类型
  • Instances in use 正在使用的实例
  • Available networks 可用的网络
  • Projects 项目

理论上,OpenStack Compute可以支持SQLAlchemy支持的任何数据库。通用数据库是用于测试和开发工作的SQLite3,MySQL,MariaDB和PostgreSQL。

3.3.2 安装和配置控制节点

以下在控制节点执行

3.3.2.1 创建数据库

1.以root账户登录数据库

# mysql -u root -p

2.创建nova_api, nova, nova_cell0数据库

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

数据库登录授权

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY '123456';
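三个数据库的建库和授权语句结构完全相同,可以用循环生成 SQL 后再交给 mysql 执行(示意脚本,密码 123456 沿用本文示例,实际环境请替换):

```shell
# 示意:循环生成 nova 相关数据库的建库与授权 SQL
# 实际使用时可通过管道执行: gen_nova_sql | mysql -u root -p
gen_nova_sql() {
  local db
  for db in nova_api nova nova_cell0; do
    echo "CREATE DATABASE IF NOT EXISTS ${db};"
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'localhost' IDENTIFIED BY '123456';"
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'%' IDENTIFIED BY '123456';"
  done
}

gen_nova_sql
```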

执行admin-openrc加载管理员凭证:

[root@controller ~]# . admin-openrc

3.创建计算服务凭证

创建nova用户:

[root@controller ~]# openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | fb6776d19862405ba0c42ec91b6282a0 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

为nova用户添加admin角色:

# openstack role add --project service --user nova admin

创建nova服务端点:

[root@controller ~]# openstack service create --name nova \
  --description "OpenStack Compute" compute

执行结果:

[root@controller ~]# openstack service create --name nova \
>   --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | f162653676dd41a2a4faa98ae64c33f1 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

4.创建compute API 服务端点:

[root@controller ~]# openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1

执行结果:

[root@controller ~]# openstack endpoint create --region RegionOne \
>   compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 84b34dc0bedc483990e32c48830368ab |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f162653676dd41a2a4faa98ae64c33f1 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 2c253051b30444d9807a37bdb05c4c66 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f162653676dd41a2a4faa98ae64c33f1 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5aefc27a9f0440e2a95f1189e25f3822 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | f162653676dd41a2a4faa98ae64c33f1 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://controller:8774/v2.1      |
+--------------+----------------------------------+

5.创建一个placement服务用户

[root@controller ~]# openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | eb7cae6117f74fa6990f8e234efc9c82 |
| name                | placement                        |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

6.添加placement用户为项目服务admin角色

# openstack role add --project service --user placement admin

7.在服务目录中创建Placement API条目:

[root@controller ~]# openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Placement API                    |
| enabled     | True                             |
| id          | 656007c5c64d4c3f91fd0dff1a32a754 |
| name        | placement                        |
| type        | placement                        |
+-------------+----------------------------------+

8.创建Placement API服务端点

[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4a5b28fe29a94c03a4059d37388f88f1 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 656007c5c64d4c3f91fd0dff1a32a754 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 0f554f27f16c4387ac6a88e9f98697c4 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 656007c5c64d4c3f91fd0dff1a32a754 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+
[root@controller ~]#  openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 3aa80b39dc244907a034bf6b4fcd12e2 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 656007c5c64d4c3f91fd0dff1a32a754 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://controller:8778           |
+--------------+----------------------------------+

3.3.2.2 安装和配置组件

1.安装软件包

# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api

2.编辑 /etc/nova/nova.conf文件并完成以下操作

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/nova/nova.conf
# vim /etc/nova/nova.conf

在 [DEFAULT] 部分, 只启用计算和元数据API:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

在[api_database] 和 [database] 部分, 配置数据库访问:

[api_database]
# ...
connection = mysql+pymysql://nova:123456@controller/nova_api

[database]
# ...
connection = mysql+pymysql://nova:123456@controller/nova

在 [DEFAULT] 部分, 配置RabbitMQ 消息队列访问:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

在[api] 和 [keystone_authtoken] 部分, 配置认证服务访问:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

在[DEFAULT] 部分,使用控制节点的管理接口IP地址配置my_ip选项:

[DEFAULT]
# ...
my_ip = 192.168.90.70

在[DEFAULT] 部分, 启用对网络服务的支持:

[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

在 [vnc] 部分,使用控制节点的管理接口IP地址配置VNC代理:

[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip

在[glance] 部分, 配置Image服务API的位置:

[glance]
# ...
api_servers = http://controller:9292

在 [oslo_concurrency] 部分, 配置锁定路径:

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

在 [placement] 部分, 配置 Placement API:

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
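nova.conf 中的 transport_url 与数据库 connection 都是 URL 形式,用户名或主机名写错是常见故障来源,可以用下面的小函数先拆出字段核对一遍(示意脚本,url_field 为演示用的假设函数名):

```shell
# 示意:用 sed 从 URL 形式的配置值中提取用户名或主机名
url_field() {
  case $2 in
    user) echo "$1" | sed -E 's#^[a-z0-9+]+://([^:@/]+):[^@]*@([^/:]+).*#\1#' ;;
    host) echo "$1" | sed -E 's#^[a-z0-9+]+://([^:@/]+):[^@]*@([^/:]+).*#\2#' ;;
  esac
}

url_field "rabbit://openstack:123456@controller" user              # 输出: openstack
url_field "mysql+pymysql://nova:123456@controller/nova_api" host   # 输出: controller
```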

由于软件包的一个bug,需要在/etc/httpd/conf.d/00-nova-placement-api.conf文件中添加如下配置,来启用对Placement API的访问:


<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

添加配置:

[root@controller ~]# vim /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  # 新增的<Directory>配置放在上面ErrorLog这一行之后
  <Directory /usr/bin>
    <IfVersion >= 2.4>
      Require all granted
    </IfVersion>
    <IfVersion < 2.4>
      Order allow,deny
      Allow from all
    </IfVersion>
  </Directory>
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>
重新启动httpd服务

# systemctl restart httpd

3.同步nova-api数据库

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning

忽略此输出中的任何弃用消息。

4.注册cell0数据库

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

5.创建cell1 cell

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
202872d4-90ce-446f-b676-e1c7a085c549

6.同步nova数据库

[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release')
  result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release')
  result = self._query(query)

7.验证 nova、 cell0、 cell1数据库是否注册正确

[root@controller ~]# nova-manage cell_v2 list_cells
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
|  Name |                 UUID                 |           Transport URL            |               Database Connection               |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 |               none:/               | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | f1c7672c-8127-4bc0-9f60-acc5364222dc | rabbit://openstack:****@controller |    mysql+pymysql://nova:****@controller/nova    |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+

3.3.2.3 完成安装启动服务

启动计算服务并配置为开机启动

# systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

注意:

nova-consoleauth 自 18.0.0(Rocky)起已被弃用,并将在后续版本中删除。控制台代理应该部署到每个 cell。如果执行的是全新安装(而不是升级),那么您可能不需要安装 nova-consoleauth 服务。详细信息请参阅 workarounds.enable_consoleauth。

3.3.3 安装和配置计算节点

以下操作在计算节点执行

3.3.3.1 安装和配置组件

1.安装软件包

# yum install openstack-nova-compute

2.编辑/etc/nova/nova.conf配置文件并完成以下操作

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/nova/nova.conf
# vim /etc/nova/nova.conf

在[DEFAULT] 部分, 只启用计算和元数据API:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata

在[DEFAULT] 部分, 配置RabbitMQ 消息队列访问:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

在[api] 和 [keystone_authtoken] 部分, 配置认证服务访问:

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

在[DEFAULT] 部分, 配置 my_ip选项:

[DEFAULT]
# ...
my_ip = 192.168.90.71

这里使用计算节点管理IP地址

在 [DEFAULT] 部分, 启用对网络服务的支持:

[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

在 [vnc] 部分, 启用和配置远程控制台访问:

[vnc]
# ...
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

服务器组件侦听所有IP地址,并且代理组件只侦听计算节点的管理接口IP地址。 基本URL指示您可以使用Web浏览器访问此计算节点上实例的远程控制台的位置。
如果用于访问远程控制台的Web浏览器驻留在无法解析控制器主机名的主机上,则必须用控制节点的管理接口IP地址替换控制器。

在 [glance] 部分, 配置Image服务API的位置:

[glance]
# ...
api_servers = http://controller:9292

在 [oslo_concurrency]部分, 配置锁定路径:

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

在 [placement] 部分, 配置 Placement API:

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

3.3.3.2 完成配置启动服务

1.确定您的计算节点是否支持虚拟机的硬件加速:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

如果此命令返回 1 或更大的值,则您的计算节点支持硬件加速,通常不需要额外配置。
如果此命令返回 0,则您的计算节点不支持硬件加速,必须配置 libvirt 使用 QEMU 而不是 KVM。(我这里返回值为4,所以并没有执行下面这一步,配置文件未做任何更改)
仅当上面的命令返回 0 时,才需要在/etc/nova/nova.conf文件中编辑 [libvirt] 部分:

# vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu
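上述判断逻辑可以用一个小的 shell 函数来示意(纯演示脚本,并非官方工具;函数名 choose_virt_type 为假设命名):

```shell
# 根据 CPU 虚拟化标志的数量决定 nova 的 virt_type 取值
choose_virt_type() {
  local count="${1:-0}"     # 传入 egrep -c '(vmx|svm)' /proc/cpuinfo 的结果
  if [ "$count" -ge 1 ]; then
    echo kvm                # 支持硬件加速,使用 KVM
  else
    echo qemu               # 不支持硬件加速,退回 QEMU
  fi
}

choose_virt_type "$(egrep -c '(vmx|svm)' /proc/cpuinfo)"
```

把输出写入 nova.conf 的 [libvirt] 段即可;本文环境返回 4,因此保持默认的 kvm。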

2.启动计算服务(包括其相关性),并将其配置为在系统引导时自动启动:

# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

注意:如果NOVA计算服务无法启动,检查/var/log/nova/nova-compute.log。控制节点上的错误消息AMQP服务器:5672是不可达的,可能指示控制节点上的防火墙阻止对端口5672的访问。配置防火墙以打开控制节点上的端口5672,并重新启动计算节点上的Nova计算服务。
如果想要清除防火墙规则执行以下命令:

# iptables -F
# iptables -X
# iptables -Z
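排查 AMQP 连接问题时,可以先用 bash 内置的 /dev/tcp 快速验证控制节点的 5672 端口是否可达,再决定是否需要调整防火墙(示意脚本,check_port 为假设的函数名):

```shell
# 检测指定主机的 TCP 端口是否可达,超时 2 秒
check_port() {
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null \
    && echo reachable || echo unreachable
}

# 在计算节点上检测控制节点的 RabbitMQ 端口
check_port controller 5672
```

输出 unreachable 时,优先检查控制节点防火墙是否放行了 5672 端口。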

3.3.3.3 添加compute节点到cell数据库

以下在控制节点上执行

1.执行admin-openrc,验证数据库中注册了几个计算节点

[root@controller ~]# . admin-openrc
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary       | Host     | Zone | Status  | State | Updated At                 |
+----+--------------+----------+------+---------+-------+----------------------------+
|  8 | nova-compute | computer | nova | enabled | up    | 2020-11-29T02:37:30.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+

2.发现计算节点

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 202872d4-90ce-446f-b676-e1c7a085c549
Checking host mapping for compute host 'computer': 2b1c2949-b61c-4d24-9060-d086b6e61592
Creating host mapping for compute host 'computer': 2b1c2949-b61c-4d24-9060-d086b6e61592
Found 1 unmapped computes in cell: 202872d4-90ce-446f-b676-e1c7a085c549

添加新计算节点时,必须在控制节点上运行nova-manage cell_v2 discover_hosts以注册这些新计算节点。 或者,您可以在/etc/nova/nova.conf中设置适当的时间间隔:

[scheduler]
discover_hosts_in_cells_interval = 300
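如果想通过脚本修改该配置项,可以参考下面的示意(在临时文件上演示 sed 的用法;实际操作时把 CONF 指向 /etc/nova/nova.conf,且依赖 GNU sed 的 a 追加命令):

```shell
# 在 [scheduler] 段下插入 discover_hosts_in_cells_interval 配置
CONF=$(mktemp)
printf '[DEFAULT]\n[scheduler]\n' > "$CONF"   # 模拟一份精简的 nova.conf
sed -i '/^\[scheduler\]/a discover_hosts_in_cells_interval = 300' "$CONF"
grep -A1 '^\[scheduler\]' "$CONF"
```

配置生效后,nova-scheduler 每 300 秒自动发现一次新计算节点,无需手动执行 discover_hosts。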

3.3.4 验证计算服务操作

以下操作在控制节点执行

1.列出服务组件以验证每个进程成功启动和注册:

[root@controller ~]#. admin-openrc
[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2020-11-29T02:42:05.000000 |
|  2 | nova-scheduler   | controller | internal | enabled | up    | 2020-11-29T02:42:05.000000 |
|  4 | nova-conductor   | controller | internal | enabled | up    | 2020-11-29T02:42:06.000000 |
|  8 | nova-compute     | computer   | nova     | enabled | up    | 2020-11-29T02:42:00.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

此输出应显示在控制节点上启用三个服务组件,并在计算节点上启用一个服务组件。

2.列出身份服务中的API端点以验证与身份服务的连接:

[root@controller ~]#  openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| nova      | compute   | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1 |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1    |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   admin: http://controller:9292         |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292      |
|           |           |                                         |
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/    |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:5000/v3/     |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/  |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   admin: http://controller:8778         |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778      |
|           |           | RegionOne                               |
|           |           |   public: http://controller:8778        |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

3.列出Image服务中的镜像以验证与Image服务的连通性:

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| e2fc9a86-f7e3-4598-848e-2d79cb060cc2 | cirros | active |
+--------------------------------------+--------+--------+

4.检查cells和placement API是否正常运行

[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results          |
+--------------------------------+
| Check: Cells v2                |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Placement API           |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Request Spec Migration  |
| Result: Success                |
| Details: None                  |
+--------------------------------+
| Check: Console Auths           |
| Result: Success                |
| Details: None                  |
+--------------------------------+

nova配置参考:https://docs.openstack.org/nova/stein/admin/index.html

3.4 安装neutron服务

3.4.1 网络服务概述

OpenStack Networking(neutron)允许您创建由其他OpenStack服务管理的接口设备并将其连接到网络。可以实现插件以适应不同的网络设备和软件,为OpenStack架构和部署提供灵活性。
网络服务包含以下组件:
neutron-server
接受API请求并将其路由到适当的OpenStack Networking插件以便采取行动。
OpenStack Networking plug-ins and agents
插拔端口,创建网络或子网,并提供IP地址。这些插件和代理根据特定云中使用的供应商和技术而有所不同。OpenStack Networking带有用于思科虚拟和物理交换机,NEC OpenFlow产品,Open vSwitch,Linux桥接和VMware NSX产品的插件和代理。
常见的代理包括L3(三层)代理、DHCP(动态主机配置协议)代理和插件代理。
Messaging queue
被大多数OpenStack Networking部署用于在neutron-server和各个代理之间路由信息,同时充当存储特定插件网络状态的数据库。
OpenStack Networking主要与OpenStack Compute进行交互,为其实例提供网络和连接。

3.4.3 安装和配置controller节点

以下操作在控制节点执行

3.4.3.1 创建数据库

1.要创建数据库,需要完成以下操作
以root用户使用数据库连接客户端连接到数据库服务器:

$ mysql -u root -p

创建neutron数据库:

MariaDB [(none)]> CREATE DATABASE neutron;

授予neutron数据库适当访问权,这里密码为123456:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY '123456';

2.加载管理员凭据以获得仅管理员访问的CLI命令:

$ . admin-openrc

3.创建服务凭证,完成以下操作
创建neutron用户

[root@controller ~]# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 988aa6330c27474bb54b662aeb7173a7 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

添加admin角色到neutron用户

# openstack role add --project service --user neutron admin

创建neutron服务实体
[root@controller ~]# openstack service create --name neutron \
  --description "OpenStack Networking" network

执行结果:

[root@controller ~]# openstack service create --name neutron \
>   --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 2547ba8ceeb1436184e78e29017f8930 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+

4.创建网络服务API端点
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696

执行结果:

[root@controller ~]# openstack endpoint create --region RegionOne  network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4008ce59ae324f59a1bcafd3d68df1b2 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cd9f54256386473b9a98a9422e274c5a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne  network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 8d8aed23061a48f3a7b0a8aefb8d5681 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cd9f54256386473b9a98a9422e274c5a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne  network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4bdb3e41746e4b2cbeadcf1fc5227363 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | cd9f54256386473b9a98a9422e274c5a |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696            |
+--------------+----------------------------------+
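上面三条端点创建命令也可以用循环一次性生成(干跑示意:这里只把命令打印出来,实际执行时去掉 echo 包装即可):

```shell
# 为 network 服务循环生成 public/internal/admin 三类端点的创建命令
cmds=$(for iface in public internal admin; do
  echo "openstack endpoint create --region RegionOne network $iface http://controller:9696"
done)
printf '%s\n' "$cmds"
```

后续为 cinder 等服务创建多个端点时,同样的循环写法可以减少重复输入。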

3.4.3.2 配置网络部分

您可以使用选项1和2所代表的两种体系结构之一来部署网络服务。

  • 选项1部署了仅支持将实例附加到提供者(外部)网络的最简单的可能架构。 没有自助服务(专用)网络,路由器或浮动IP地址。
    只有管理员或其他特权用户才能管理提供商网络。
  • 选项2增加了选项1,其中支持将实例附加到自助服务网络的第3层服务。
    demo等非特权用户可以管理自助服务网络,包括连接自助服务网络与提供商网络的路由器。此外,浮动IP地址可以让外部网络(如Internet)访问使用自助服务网络的实例。自助服务网络通常使用VXLAN等隧道网络协议。选项2同样支持将实例附加到提供商网络。

以下两项配置二选一:
 Networking Option 1: Provider networks
 Networking Option 2: Self-service networks

这里选择Networking Option 2: Self-service networks

1.安装组件

# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables

2.配置服务组件
编辑/etc/neutron/neutron.conf配置文件,并完成以下操作:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/neutron.conf 
# vim /etc/neutron/neutron.conf

在[database]部分, 配置数据库访问:

[database]

connection = mysql+pymysql://neutron:123456@controller/neutron

在[DEFAULT]部分, 启用模块化第2层(ML2)插件,路由器服务和overlapping IP addresses:

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true

在 [DEFAULT] 部分, 配置RabbitMQ消息队列访问:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

在 [DEFAULT]和 [keystone_authtoken]部分, 配置认证服务访问:

[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

在 [DEFAULT] 和[nova] 部分, 配置网络通知计算网络拓扑更改:

[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

在 [oslo_concurrency] 部分,配置锁定路径:

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

3.配置网络二层插件

ML2插件使用Linux桥接机制为实例构建第2层(桥接和交换)虚拟网络基础结构。

编辑/etc/neutron/plugins/ml2/ml2_conf.ini 文件并完成以下操作:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/plugins/ml2/ml2_conf.ini
# vim /etc/neutron/plugins/ml2/ml2_conf.ini

在 [ml2]部分,启用 flat, VLAN, and VXLAN 网络:

[ml2]
# ...
type_drivers = flat,vlan,vxlan

在 [ml2]部分,启用VXLAN 自助服务网络:

[ml2]
# ...
tenant_network_types = vxlan

在 [ml2]部分, 启用Linux网桥和第2层集群机制:

[ml2]
# ...
mechanism_drivers = linuxbridge,l2population

在 [ml2]部分, 启用端口安全扩展驱动程序:

[ml2]
# ...
extension_drivers = port_security

在 [ml2_type_flat] 部分, 将提供者虚拟网络配置为扁平网络:

[ml2_type_flat]
# ...
flat_networks = provider

在 [ml2_type_vxlan] 部分, 为自助服务网络配置VXLAN网络标识符范围:

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

在 [securitygroup] 部分, 启用ipset以提高安全组规则的效率:

[securitygroup]
# ...
enable_ipset = true

4.配置linux网桥代理

Linux桥接代理为实例构建层-2(桥接和交换)虚拟网络基础结构,并处理安全组

编辑/etc/neutron/plugins/ml2/linuxbridge_agent.ini 文件并完成以下操作:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

在 [linux_bridge]部分, 将提供者虚拟网络映射到提供者物理网络接口

[linux_bridge]
physical_interface_mappings = provider:ens35

注意:这里的ens35物理网卡是外部网络的网卡(underlying provider physical network interface)。
在[vxlan]部分中,启用vxlan隧道网络,配置处理隧道网络的物理网络接口的IP地址,并启用

layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = 192.168.91.70
l2_population = true

注意这里的ip地址192.168.91.70为隧道网络控制节点的ip地址(IP address of the underlying physical network interface that handles overlay networks)

在 [securitygroup]部分, 启用安全组并配置Linux网桥iptables防火墙驱动程序:

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

通过验证下列所有 sysctl 值均设置为 1,确保您的Linux操作系统内核支持网桥过滤器:

$ vim /usr/lib/sysctl.d/00-system.conf 
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
$ sysctl -p
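net.bridge.* 这组 sysctl 键只有在 br_netfilter 内核模块加载后才存在。为了让该模块在重启后自动加载,可以在 /etc/modules-load.d/ 下放置一个配置文件;下面的示意脚本在临时目录中演示该文件的内容(/etc/modules-load.d/br_netfilter.conf 为假设的文件名):

```shell
# systemd 会在开机时读取 modules-load.d 目录,逐行加载列出的模块
DIR=$(mktemp -d)                               # 演示用临时目录,实际应为 /etc/modules-load.d
echo br_netfilter > "$DIR/br_netfilter.conf"   # 文件内容即模块名
cat "$DIR/br_netfilter.conf"
```

写入实际路径后,可用 modprobe br_netfilter 立即加载,再执行 sysctl -p 使配置生效。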

5.配置三层代理

Layer-3(L3)代理为自助虚拟网络提供路由和NAT服务。

编辑/etc/neutron/l3_agent.ini 文件并完成以下操作:

$ sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/l3_agent.ini
$ vim /etc/neutron/l3_agent.ini

在 [DEFAULT]部分, 配置Linux网桥接口驱动程序和外部网络桥接器:

[DEFAULT]
# ...
interface_driver = linuxbridge

6.配置DHCP代理

DHCP代理为虚拟网络提供DHCP服务。

编辑/etc/neutron/dhcp_agent.ini文件并完成以下操作:

$ sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/dhcp_agent.ini
$ vim /etc/neutron/dhcp_agent.ini

在[DEFAULT]部分,配置Linux网桥接口驱动程序,Dnsmasq DHCP驱动程序,并启用隔离的元数据,以便提供商网络上的实例可以通过网络访问元数据:

[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

3.4.3.5 配置metadata

元数据代理为实例提供配置信息,例如凭据。
编辑 /etc/neutron/metadata_agent.ini文件并完成以下操作:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/metadata_agent.ini
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]

nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

3.4.3.6 配置计算服务使用网络服务

编辑/etc/nova/nova.conf文件并执行以下操作:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/nova/nova.conf
# vim /etc/nova/nova.conf

在[neutron]部分,配置访问参数,启用元数据代理并配置秘密:

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

3.4.3.7 完成安装启动服务

1.网络服务初始化脚本需要一个指向ML2插件配置文件/etc/neutron/plugins/ml2/ml2_conf.ini的符号链接/etc/neutron/plugin.ini。 如果此符号链接不存在,请使用以下命令创建它:

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
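符号链接创建后可以用 readlink 校验指向是否正确。下面在临时目录中演示同样的操作(目录与文件名仅为演示,实际路径为 /etc/neutron 下的两个文件):

```shell
# 创建指向 ml2_conf.ini 的符号链接 plugin.ini 并校验
NDIR=$(mktemp -d)
touch "$NDIR/ml2_conf.ini"                     # 模拟 ML2 插件配置文件
ln -s "$NDIR/ml2_conf.ini" "$NDIR/plugin.ini"  # 对应正文中的 ln -s 命令
readlink "$NDIR/plugin.ini"                    # 应输出被链接的目标路径
```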

2.同步数据库

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

3.重启compute API服务

# systemctl restart openstack-nova-api.service

4.启动网络服务并设为开机启动

# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

5.对于联网选项2,还启用并启动第三层服务:

# systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service

3.4.4 安装和配置compute节点

以下操作在计算节点执行

计算节点处理实例的连接和安全组。

3.4.4.1 安装和配置组件

1.安装组件

# yum install openstack-neutron-linuxbridge ebtables ipset

2.配置公共组件
网络通用组件配置包括身份验证机制,消息队列和插件。
编辑/etc/neutron/neutron.conf文件并完成以下操作:
在[database]部分,注释掉任何connection选项,因为计算节点不直接访问数据库。

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/neutron.conf
# vim /etc/neutron/neutron.conf

在[DEFAULT]部分中,配置RabbitMQ 消息队列访问:

[DEFAULT] 
#... 
transport_url = rabbit://openstack:123456@controller

在[DEFAULT]和[keystone_authtoken]部分中,配置身份服务访问:

[DEFAULT] 
#... 
auth_strategy = keystone

[keystone_authtoken] 
#... 
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

在[oslo_concurrency]部分中,配置锁定路径:

[oslo_concurrency] 
#... 
lock_path = /var/lib/neutron/tmp

3.4.4.2 配置网络部分

选择您为控制器节点选择的相同网络选项以配置特定的服务。 之后,返回此处并继续配置计算服务以使用网络服务。
 网络选项1:提供商网络
 网络选项2:自助服务网络

这里选择网络选项2:自助服务网络

1.配置Linux网桥
编辑/etc/neutron/plugins/ml2/linuxbridge_agent.ini文件并完成以下操作:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

在[vxlan]部分中,启用VXLAN隧道网络,配置处理隧道网络的物理网络接口的IP地址,并启用第2层群体:

[vxlan]
enable_vxlan = true
local_ip = 192.168.91.71
l2_population = true

注意:这里的192.168.91.71为计算节点隧道网络的IP地址(underlying physical network interface that handles overlay networks)
在[securitygroup]节中,启用安全组并配置Linux网桥iptables防火墙驱动程序:

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

2.配置计算服务使用网络服务
编辑/etc/nova/nova.conf文件并完成以下操作:

# vim /etc/nova/nova.conf
[neutron]

url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

确保您的Linux操作系统内核支持网桥过滤器,方法是验证以下所有sysctl值均设置为1:

$ vim /usr/lib/sysctl.d/00-system.conf 
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
$ sysctl -p

自动加载br_netfilter模块方法:https://editor.csdn.net/md/?articleId=110308456

3.4.4.5 完成安装启动服务

1.重启compute服务

# systemctl restart openstack-nova-compute.service

2.设置网桥服务开机启动

# systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service

3.5 安装Horizon服务

以下操作在控制节点执行

本节介绍如何在控制节点上安装和配置仪表板。
仪表板所需的唯一核心服务是身份服务。 您可以将仪表板与其他服务结合使用,例如镜像服务,计算和网络。 您还可以在具有独立服务(如对象存储)的环境中使用仪表板。

3.5.1 安装和配置组件

1.安装软件包

# yum install openstack-dashboard -y

2.编辑 /etc/openstack-dashboard/local_settings 文件并完成以下操作:

# vim /etc/openstack-dashboard/local_settings

配置仪表板以在controller节点上使用OpenStack服务 :

OPENSTACK_HOST = "controller"

允许您的主机访问仪表板:

ALLOWED_HOSTS = ['*']

或者ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
ALLOWED_HOSTS设置为['*']时接受所有主机。这对开发环境可能有用,但不安全,不应用于生产环境。

配置memcache会话存储服务:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'controller:11211',
    }
}

注释掉任何其他会话存储配置。
开启身份认证API 版本v3

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

启用对域的支持:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

配置API版本:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

配置Default为您通过仪表板创建的用户的默认域:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

将用户配置为通过仪表板创建的用户的默认角色:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

配置时区(可选)

TIME_ZONE = "Asia/Shanghai"

3.如果以下行尚不存在,则将其添加到/etc/httpd/conf.d/openstack-dashboard.conf:

# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
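如果希望脚本化地幂等添加这一行(重复执行不会追加多份),可以参考下面的示意(在临时文件上演示,实际文件为 /etc/httpd/conf.d/openstack-dashboard.conf):

```shell
# 仅当整行不存在时才追加,重复执行结果不变
F=$(mktemp)                                    # 演示用临时文件
LINE='WSGIApplicationGroup %{GLOBAL}'
grep -qxF "$LINE" "$F" || echo "$LINE" >> "$F" # 第一次执行:追加
grep -qxF "$LINE" "$F" || echo "$LINE" >> "$F" # 第二次执行:已存在,跳过
cat "$F"
```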

3.5.2 完成安装启动服务

1.完成安装,重启web服务和会话存储

# systemctl restart httpd.service memcached.service

3.5.3 登录web验证配置

在浏览器中输入 http://192.168.90.70/dashboard ,访问OpenStack的dashboard界面,

  • domain:default
  • 用户名:admin(管理员)或myuser(租户)
  • 密码:123456

登录界面如下:

OpenStack Stein版搭建详解_第4张图片

当前的项目信息:

OpenStack Stein版搭建详解_第5张图片

当前上传的镜像

OpenStack Stein版搭建详解_第6张图片

 

3.6 安装Cinder服务

 块存储服务(cinder)为访客实例提供块存储设备。 存储配置和使用的方法由块存储驱动程序确定,或者在多后端配置的情况下由驱动程序确定。 有多种可用的驱动程序:NAS / SAN,NFS,iSCSI,Ceph等。
 块存储API和调度程序服务通常在控制节点上运行。 根据所使用的驱动程序,卷服务可以在控制节点,计算节点或独立存储节点上运行。
 一旦能够在OpenStack环境中“启动实例”,请按照以下说明将Cinder添加到基本环境。

3.6.1 块存储服务概述

 OpenStack块存储服务(Cinder)将持久性存储添加到虚拟机。块存储为管理卷提供基础架构,并与OpenStack Compute进行交互以提供实例卷。该服务还支持管理卷快照和卷类型。
块存储服务包含以下组件:
cinder-api
接受API请求,并将它们路由到cinder-volume操作。
cinder-volume
直接与Block Storage服务进行交互,以及诸如cinder-scheduler。它也通过消息队列与这些进程交互。该cinder-volume服务响应发送到块存储服务的读取和写入请求以保持状态。它可以通过驱动程序架构与各种存储提供商进行交互。
cinder-scheduler daemon守护进程
选择要在其上创建卷的最佳存储提供者节点。 与nova-scheduler类似的组件。
cinder-backup daemon守护进程
该cinder-backup服务可将任何类型的卷备份到备份存储提供程序。与cinder-volume服务一样,它可以通过驱动程序体系结构与各种存储提供商进行交互。
Messaging queue消息队列
路由块存储过程之间的信息。

3.6.2 安装和配置cinder节点

以下操作在cinder节点执行

3.6.2.1 安装配置LVM

 本节介绍如何为块存储服务安装和配置存储节点。为简单起见,此配置使用一个带有空本地块存储设备的存储节点。这些指令使用/dev/sdb,您可以将其替换为特定节点的不同设备。
 该服务使用LVM驱动程序在该设备上配置逻辑卷,并通过iSCSI传输将其提供给实例。您可以在这些说明的基础上稍作修改,使用额外的存储节点水平扩展您的环境。

1.安装支持的软件包

  • 安装LVM软件包
# yum install lvm2 device-mapper-persistent-data
  • 启动LVM元数据服务并将其配置为在系统引导时启动:
# systemctl enable lvm2-lvmetad.service && systemctl start lvm2-lvmetad.service

说明:一些发行版默认包含LVM。

2.创建LVM物理逻辑卷/dev/sdb

[root@cinder1 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

3.创建cinder-volumes逻辑卷组

[root@cinder ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

4.只有实例才能访问块存储卷。但是,底层操作系统管理与卷关联的设备。默认情况下,LVM卷扫描工具会扫描包含卷的块存储设备的/dev目录。如果项目在其卷上使用LVM,扫描工具会检测到这些卷并尝试缓存它们,这可能导致底层操作系统和项目卷出现各种问题。您必须重新配置LVM,使其仅扫描包含cinder-volumes卷组的设备。编辑/etc/lvm/lvm.conf文件并完成以下操作:
在devices部分中,添加一个接受/dev/sdb设备并拒绝所有其他设备的过滤器:

# vim /etc/lvm/lvm.conf
devices {
...
filter = [ "a/sdb/", "r/.*/"]
}

过滤器数组中的每一项都以 a(accept,接受)或 r(reject,拒绝)开头,后接设备名称的正则表达式。数组必须以 r/.*/ 结尾以拒绝其余所有设备。您可以使用 vgs -vvvv 命令来测试过滤器。
如果您的存储节点在操作系统磁盘上使用LVM,则还必须将关联的设备添加到过滤器。例如,如果/dev/sda设备包含操作系统:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

同样,如果您的计算节点在操作系统磁盘上使用LVM,则还必须修改这些节点上/etc/lvm/lvm.conf文件中的过滤器,使其仅包含操作系统磁盘。例如,如果/dev/sda设备包含操作系统:

filter = [ "a/sda/", "r/.*/" ]
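LVM filter 按先匹配先生效的顺序逐项处理设备名。下面用一个纯 shell 函数模拟这条过滤规则的效果(仅为帮助理解的示意,并非 LVM 的真实实现):

```shell
# 模拟 filter = [ "a/sdb/", "r/.*/" ] 的判定逻辑:
# 先匹配 sdb 则接受,否则落入兜底的 r/.*/ 被拒绝
lvm_filter() {
  case "$1" in
    */sdb*) echo accept ;;   # 对应 "a/sdb/"
    *)      echo reject ;;   # 对应 "r/.*/"
  esac
}

lvm_filter /dev/sdb    # cinder-volumes 所在设备,被接受
lvm_filter /dev/sdc1   # 其他设备,被拒绝
```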

3.6.2.2 安装和配置组件

1.安装软件包

# yum install openstack-cinder targetcli python-keystone -y

2.编辑/etc/cinder/cinder.conf文件并完成以下操作:

# vim /etc/cinder/cinder.conf

在[database]部分中,配置数据库访问:

[database]

connection = mysql+pymysql://cinder:123456@controller/cinder

在该[DEFAULT]部分中,配置RabbitMQ 消息队列访问:

[DEFAULT]
transport_url = rabbit://openstack:123456@controller

在[DEFAULT]和[keystone_authtoken]部分中,配置身份服务访问:

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

在[DEFAULT]部分中,配置my_ip选项:

[DEFAULT]
# ...
my_ip = 192.168.90.72

注意这里的192.168.90.72为存储节点上管理网络接口的IP地址
在[lvm]部分中,使用LVM驱动程序,cinder-volumes卷组,iSCSI协议和相应的iSCSI服务配置LVM后端。如果该[lvm]部分不存在,请创建它:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

在[DEFAULT]部分中,启用LVM后端:

[DEFAULT]
# ...
enabled_backends = lvm

后端名称是任意的。作为示例,本指南使用驱动程序的名称作为后端的名称。
在[DEFAULT]部分中,配置Image Service API的位置:

[DEFAULT] 
#... 
glance_api_servers = http://controller:9292

在[oslo_concurrency]部分中,配置锁定路径:

[oslo_concurrency] 
#... 
lock_path = /var/lib/cinder/tmp

3.6.2.3 完成安装启动服务

设置存储服务开机启动

# systemctl enable openstack-cinder-volume.service target.service 
# systemctl start openstack-cinder-volume.service target.service

3.6.3 安装和配置controller节点

本节介绍如何在控制节点上安装和配置代码为cinder的块存储服务。 此服务至少需要一个为实例提供卷的额外存储节点。

以下操作在控制节点执行

3.6.3.1 创建cinder数据库

在安装和配置块存储服务之前,您必须创建数据库,服务凭据和API端点。
要创建数据库,请完成以下步骤:
1.使用数据库访问客户端以root用户身份连接到数据库服务器:

$ mysql -u root -p

创建cinder数据库:

MariaDB [(none)]> CREATE DATABASE cinder;

授予对cinder数据库的适当访问权限:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY '123456';

2.加载admin凭据

$ . admin-openrc

3.要创建服务凭据,请完成以下步骤:
创建一个cinder用户:

[root@controller ~]# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 2df1d57dd00a418080d3b5ed8eb4c2a0 |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

添加admin角色到cinder用户:

$ openstack role add --project service --user cinder admin

注意,此命令无输出结果

创建cinderv2和cinderv3服务实体:

# openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
# openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3

执行结果:

[root@controller ~]# openstack service create --name cinderv2 \
>   --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 3644ab2053cb4ab5b5e754548f6276fa |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+

[root@controller ~]# openstack service create --name cinderv3 \
>   --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | a220c2f4900c4622a6cd65a5f093cddb |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+

注意:块存储服务需要两个服务实体。

4.创建块存储服务API端点:

$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%\(project_id\)s
$ openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%\(project_id\)s

 

执行结果:

[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | d79c21958c104b73998b8a0a2656ac1a         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 3644ab2053cb4ab5b5e754548f6276fa         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | f267e9427c8d4087b457a5ec7342ace4         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 3644ab2053cb4ab5b5e754548f6276fa         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | c89c663ccee14ecd826f3402446f9f74         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | 3644ab2053cb4ab5b5e754548f6276fa         |
| service_name | cinderv2                                 |
| service_type | volumev2                                 |
| url          | http://controller:8776/v2/%(project_id)s |
+--------------+------------------------------------------+

[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 03ffabf166eb4d51959f9bafd9b75ad4         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | a220c2f4900c4622a6cd65a5f093cddb         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 8aca3fa06384499fbcec3f6ed5fb6299         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | a220c2f4900c4622a6cd65a5f093cddb         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
[root@controller ~]# openstack endpoint create --region RegionOne \
>   volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 3cef8b97ec5543a1a3435c6f2df133db         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | a220c2f4900c4622a6cd65a5f093cddb         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
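
The six endpoint-create commands above differ only in the API version (v2/v3) and the interface (public/internal/admin). As a sketch, a nested loop can print all six commands so you can review them and then pipe the output to a shell after sourcing admin-openrc:

```shell
# Print the six "openstack endpoint create" commands for cinderv2/cinderv3.
# Review the output, then pipe it to sh to execute (assumes admin-openrc is sourced).
for ver in v2 v3; do
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne volume${ver} ${iface} http://controller:8776/${ver}/%\(project_id\)s"
  done
done
```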

3.6.3.2 Install and configure components

1. Install the packages:

# yum install openstack-cinder

2. Edit the /etc/cinder/cinder.conf file and complete the following actions:

# sed -i.bak -e'/^#/d' -e'/^$/d' /etc/cinder/cinder.conf
# vim /etc/cinder/cinder.conf
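
The sed one-liner above keeps a backup of the packaged file as cinder.conf.bak and strips comment and blank lines, leaving only active settings. Its effect can be demonstrated on a scratch file:

```shell
# Demonstrate the sed one-liner on a scratch file: comment and blank lines
# are deleted in place, and the original is preserved as <file>.bak
f=$(mktemp)
printf '# a comment\n\nkey = value\n' > "$f"
sed -i.bak -e '/^#/d' -e '/^$/d' "$f"
cat "$f"          # -> key = value
rm -f "$f" "$f.bak"
```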

In the [database] section, configure database access:

[database]
# ...
connection = mysql+pymysql://cinder:123456@controller/cinder

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]
# ...
my_ip = 192.168.90.70

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

3. Populate the Block Storage database:

# su -s /bin/sh -c "cinder-manage db sync" cinder
Option "logdir" from group "DEFAULT" is deprecated. Use option "log-dir" from group "DEFAULT".

Ignore any deprecation messages in this output.

3.6.3.3 Configure Compute to use Block Storage

Edit the /etc/nova/nova.conf file and add the following:

# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

3.6.3.4 Finalize installation and start the services

Restart the Compute API service:

# systemctl restart openstack-nova-api.service

Start the Block Storage services and configure them to start at system boot:

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

3.6.4 Verify the Cinder configuration

Verify Cinder operation by running these commands on the controller node.

1. Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc

2. List service components to verify successful launch of each process:

[root@controller ~]# openstack volume service list
+------------------+------------+------+---------+-------+----------------------------+
| Binary           | Host       | Zone | Status  | State | Updated At                 |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up    | 2020-11-29T05:19:09.000000 |
+------------------+------------+------+---------+-------+----------------------------+
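
When scripting this check, the State column of the table can be parsed so a script fails fast if a service is down. (If the storage node's cinder-volume service is up, it should also appear in this list.) A minimal sketch against one canned row; in a real run, the rows would come from `openstack volume service list`:

```shell
# Extract the State column (6th '|'-separated field) from a service-list row.
# The sample row is hard-coded here; in practice read it from the CLI output.
row='| cinder-scheduler | controller | nova | enabled | up    | 2020-11-29T05:19:09.000000 |'
state=$(echo "$row" | awk -F'|' '{gsub(/ /, "", $6); print $6}')
echo "$state"     # -> up
[ "$state" = "up" ] || echo "WARNING: cinder-scheduler is down"
```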

4 Create a virtual machine instance

The creation workflow:
1) Create the virtual networks
2) Create an m1.nano flavor (equivalent to defining the virtual machine's hardware configuration)
3) Generate a key pair (OpenStack connects to instances with key pairs rather than passwords)
4) Add security group rules (security groups are implemented with iptables)
5) Launch an instance (there are three ways: 1. the CLI, 2. the API, 3. the Dashboard; the Dashboard itself also works through the API)
6) Virtual networks come in two kinds: provider networks, which sit on the same network as the hosts, and self-service (private) networks, which define their own routers and are on a separate network from the hosts

4.1 Create the external network

All of the following operations are performed on the controller node.

Create the virtual networks for the networking option you chose when configuring Neutron. If you chose option 1, create only the provider network. If you chose option 2, create both the provider and self-service networks:
 Provider network
 Self-service network

After creating the appropriate networks for your environment, you can continue preparing the environment to launch an instance.

Provider network
Before launching an instance, you must create the necessary virtual network infrastructure. An administrator or other privileged user must create this network because it connects directly to the physical network infrastructure.

4.1.1 Create the provider external network

On the controller node, source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc

Create the virtual network (named provider):

$ openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

Result:

[root@controller ~]#  openstack network create --share --external \
>   --provider-physical-network provider \
>   --provider-network-type flat provider
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                     | Value                                                                                                                                                                              |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up            | UP                                                                                                                                                                                 |
| availability_zone_hints   |                                                                                                                                                                                    |
| availability_zones        |                                                                                                                                                                                    |
| created_at                | 2020-11-29T10:24:58Z                                                                                                                                                               |
| description               |                                                                                                                                                                                    |
| dns_domain                | None                                                                                                                                                                               |
| id                        | a8204752-7bb3-480a-a552-b47583d8d21f                                                                                                                                               |
| ipv4_address_scope        | None                                                                                                                                                                               |
| ipv6_address_scope        | None                                                                                                                                                                               |
| is_default                | False                                                                                                                                                                              |
| is_vlan_transparent       | None                                                                                                                                                                               |
| location                  | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'admin', 'id': u'8c17b9e0e7ff4393b4b1a883ca8efffe'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| mtu                       | 1500                                                                                                                                                                               |
| name                      | provider                                                                                                                                                                           |
| port_security_enabled     | True                                                                                                                                                                               |
| project_id                | 8c17b9e0e7ff4393b4b1a883ca8efffe                                                                                                                                                   |
| provider:network_type     | flat                                                                                                                                                                               |
| provider:physical_network | provider                                                                                                                                                                           |
| provider:segmentation_id  | None                                                                                                                                                                               |
| qos_policy_id             | None                                                                                                                                                                               |
| revision_number           | 1                                                                                                                                                                                  |
| router:external           | External                                                                                                                                                                           |
| segments                  | None                                                                                                                                                                               |
| shared                    | True                                                                                                                                                                               |
| status                    | ACTIVE                                                                                                                                                                             |
| subnets                   |                                                                                                                                                                                    |
| tags                      |                                                                                                                                                                                    |
| updated_at                | 2020-11-29T10:24:58Z                                                                                                                                                               |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Option descriptions:
- --share allows all projects to use the virtual network.
- --external defines the virtual network as external. To create an internal network instead, use --internal (the default).
- --provider-physical-network provider and --provider-network-type flat connect the flat virtual network to the flat (native/untagged) physical network on the host's external interface (ens35 in this setup), using the physical-network mapping defined in the ml2_conf.ini and linuxbridge_agent.ini files.

List the networks to confirm creation:

[root@controller ~]# openstack network list
+--------------------------------------+----------+---------+
| ID                                   | Name     | Subnets |
+--------------------------------------+----------+---------+
| a8204752-7bb3-480a-a552-b47583d8d21f | provider |         |
+--------------------------------------+----------+---------+

Log in to the dashboard as the admin user to view the created network and the network topology (screenshots omitted).

4.1.2 Create a subnet on the network

Create a subnet on the external network (named provider-sub):

# openstack subnet create --network provider \
  --allocation-pool start=192.168.92.80,end=192.168.92.90 \
  --dns-nameserver 114.114.114.114 --gateway 192.168.92.2 \
  --subnet-range 192.168.92.0/24 provider-sub

Result:

[root@controller ~]# openstack subnet create --network provider \
>   --allocation-pool start=192.168.92.80,end=192.168.92.90 \
>   --dns-nameserver 114.114.114.114 --gateway 192.168.92.2 \
>   --subnet-range 192.168.92.0/24 provider-sub
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                                                                                              |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools  | 192.168.92.80-192.168.92.90                                                                                                                                                        |
| cidr              | 192.168.92.0/24                                                                                                                                                                    |
| created_at        | 2020-11-29T10:26:43Z                                                                                                                                                               |
| description       |                                                                                                                                                                                    |
| dns_nameservers   | 114.114.114.114                                                                                                                                                                    |
| enable_dhcp       | True                                                                                                                                                                               |
| gateway_ip        | 192.168.92.2                                                                                                                                                                       |
| host_routes       |                                                                                                                                                                                    |
| id                | 81c27b53-03db-4058-b4a0-398662345d41                                                                                                                                               |
| ip_version        | 4                                                                                                                                                                                  |
| ipv6_address_mode | None                                                                                                                                                                               |
| ipv6_ra_mode      | None                                                                                                                                                                               |
| location          | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'admin', 'id': u'8c17b9e0e7ff4393b4b1a883ca8efffe'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| name              | provider-sub                                                                                                                                                                       |
| network_id        | a8204752-7bb3-480a-a552-b47583d8d21f                                                                                                                                               |
| prefix_length     | None                                                                                                                                                                               |
| project_id        | 8c17b9e0e7ff4393b4b1a883ca8efffe                                                                                                                                                   |
| revision_number   | 0                                                                                                                                                                                  |
| segment_id        | None                                                                                                                                                                               |
| service_types     |                                                                                                                                                                                    |
| subnetpool_id     | None                                                                                                                                                                               |
| tags              |                                                                                                                                                                                    |
| updated_at        | 2020-11-29T10:26:43Z                                                                                                                                                               |
+-------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Option descriptions:
- Replace PROVIDER_NETWORK_CIDR with the subnet on the provider physical network in CIDR notation.
- Replace START_IP_ADDRESS and END_IP_ADDRESS with the first and last IP addresses of the range within the subnet that you want to allocate to instances. This range must not include any existing active IP addresses.
- Replace DNS_RESOLVER with the IP address of a DNS resolver; in most cases you can use one from the host's /etc/resolv.conf file.
- Replace PROVIDER_NETWORK_GATEWAY with the gateway IP address on the provider network, typically the ".1" address.
- --network specifies the network the subnet belongs to (it must match the network name created above, provider here).
- The final positional argument (provider-sub) is the name of the new subnet.

List the subnets to confirm creation:

[root@controller ~]# openstack subnet list
+--------------------------------------+--------------+--------------------------------------+-----------------+
| ID                                   | Name         | Network                              | Subnet          |
+--------------------------------------+--------------+--------------------------------------+-----------------+
| dfb662d2-4abc-48d0-9f2e-cc144000e607 | provider-sub | 13205b14-7730-45ea-9c45-f4ec0d1aa203 | 192.168.92.0/24 |
+--------------------------------------+--------------+--------------------------------------+-----------------+
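
The allocation pool 192.168.92.80–192.168.92.90 caps how many addresses Neutron will hand out on this subnet for instances and floating IPs. As a quick sanity check, the size of the inclusive range is:

```shell
# Size of the inclusive allocation pool 192.168.92.80-192.168.92.90
start=80
end=90
echo $((end - start + 1))   # -> 11 addresses
```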

Log in to the dashboard as the admin user to view the created subnet and the updated network topology (screenshots omitted).

4.1.3 Observe NIC changes on the node

For an instance created in OpenStack to reach the outside world, an external network (the provider network) must exist; a virtual router then connects the external network to the tenant networks, and Neutron implements the external access with Linux bridges. When Neutron creates the external network and its subnet, it creates a new bridge and attaches the external NIC ens35 to it. Running ifconfig now shows an extra bridge, brqa8204752-7b:

brqa8204752-7b: flags=4163  mtu 1500
        inet 192.168.92.70  netmask 255.255.255.0  broadcast 192.168.92.255
        inet6 fe80::68d6:ffff:fe54:e051  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:5b  txqueuelen 1000  (Ethernet)
        RX packets 6  bytes 426 (426.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 1058 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163  mtu 1500
        inet 192.168.90.70  netmask 255.255.255.0  broadcast 192.168.90.255
        inet6 fe80::fc0d:675:2ad6:f897  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:47  txqueuelen 1000  (Ethernet)
        RX packets 27308  bytes 8017088 (7.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26886  bytes 12815487 (12.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens34: flags=4163  mtu 1500
        inet 192.168.91.70  netmask 255.255.255.0  broadcast 192.168.91.255
        inet6 fe80::1ddb:eab2:f3d4:2273  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:51  txqueuelen 1000  (Ethernet)
        RX packets 98  bytes 11487 (11.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1382 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens35: flags=4163  mtu 1500
        inet6 fe80::a786:42fa:f068:a716  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:5b  txqueuelen 1000  (Ethernet)
        RX packets 31582  bytes 45610870 (43.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5384  bytes 358143 (349.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 361780  bytes 116592851 (111.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 361780  bytes 116592851 (111.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapaa8ecb85-65: flags=4163  mtu 1500
        ether 0a:91:4f:8c:01:21  txqueuelen 1000  (Ethernet)
        RX packets 5  bytes 446 (446.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 608 (608.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Install the bridge utilities:

[root@controller ~]# yum install -y bridge-utils

Inspect the bridge: brqa8204752-7b has the physical NIC ens35 and the tapaa8ecb85-65 interface attached:

[root@controller ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
brqa8204752-7b          8000.000c29f2f95b       no              ens35
                                                        tapaa8ecb85-65
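
The bridge name is derived from the network, not random: the linuxbridge agent names each bridge `brq` plus the first 11 characters of the Neutron network ID (an observed naming convention of the agent, not a stable API):

```shell
# Derive the expected bridge name from the provider network ID created earlier
net_id="a8204752-7bb3-480a-a552-b47583d8d21f"
bridge="brq$(echo "$net_id" | cut -c1-11)"
echo "$bridge"   # -> brqa8204752-7b
```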

4.2 Create the tenant network

If you chose networking option 2, you can also create a self-service (private) network that connects to the physical network infrastructure via NAT. This network includes a DHCP server that provides IP addresses to instances. Instances on this network can automatically reach external networks such as the Internet; however, access to these instances from external networks requires a floating IP address.
The demo user, or any other non-privileged user, can create this network because it only provides connectivity to instances within the demo project.
Warning

You must create the provider network before the self-service network.

4.2.1 Create the self-service network

1. On the controller node, source the demo credentials:

$ . demo-openrc

2. Create the network (named selfservice):

[root@controller ~]# openstack network create selfservice
+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                     | Value                                                                                                                                                                                  |
+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up            | UP                                                                                                                                                                                     |
| availability_zone_hints   |                                                                                                                                                                                        |
| availability_zones        |                                                                                                                                                                                        |
| created_at                | 2020-11-29T11:42:37Z                                                                                                                                                                   |
| description               |                                                                                                                                                                                        |
| dns_domain                | None                                                                                                                                                                                   |
| id                        | 91c96c8a-d50d-407d-975e-a3264005743c                                                                                                                                                   |
| ipv4_address_scope        | None                                                                                                                                                                                   |
| ipv6_address_scope        | None                                                                                                                                                                                   |
| is_default                | False                                                                                                                                                                                  |
| is_vlan_transparent       | None                                                                                                                                                                                   |
| location                  | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'myproject', 'id': u'6ec6bbfd97fd4fc0a02df4837216b432'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| mtu                       | 1450                                                                                                                                                                                   |
| name                      | selfservice                                                                                                                                                                            |
| port_security_enabled     | True                                                                                                                                                                                   |
| project_id                | 6ec6bbfd97fd4fc0a02df4837216b432                                                                                                                                                       |
| provider:network_type     | None                                                                                                                                                                                   |
| provider:physical_network | None                                                                                                                                                                                   |
| provider:segmentation_id  | None                                                                                                                                                                                   |
| qos_policy_id             | None                                                                                                                                                                                   |
| revision_number           | 1                                                                                                                                                                                      |
| router:external           | Internal                                                                                                                                                                               |
| segments                  | None                                                                                                                                                                                   |
| shared                    | False                                                                                                                                                                                  |
| status                    | ACTIVE                                                                                                                                                                                 |
| subnets                   |                                                                                                                                                                                        |
| tags                      |                                                                                                                                                                                        |
| updated_at                | 2020-11-29T11:42:37Z                                                                                                                                                                   |
+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Non-privileged users typically cannot supply additional parameters to this command. The service automatically chooses parameters using information from the following file:

# cat /etc/neutron/plugins/ml2/ml2_conf.ini
ml2_conf.ini:
[ml2]
tenant_network_types = vxlan
[ml2_type_vxlan]
vni_ranges = 1:1000

The type of internal network created is specified by tenant_network_types, here vxlan. This option can also specify other internal network types, such as flat, vlan, or gre.
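
Note the mtu of 1450 in the network-create output above, versus 1500 on the provider network: VXLAN encapsulation adds about 50 bytes of headers, which Neutron subtracts from the physical MTU:

```shell
# Why the self-service network MTU is 1450 on a 1500-byte physical network:
# VXLAN adds ~50 bytes (outer Ethernet 14 + outer IP 20 + UDP 8 + VXLAN 8)
phys_mtu=1500
vxlan_overhead=$((14 + 20 + 8 + 8))
echo $((phys_mtu - vxlan_overhead))   # -> 1450
```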
List the networks to confirm creation:

[root@controller ~]# openstack network list
+--------------------------------------+-------------+--------------------------------------+
| ID                                   | Name        | Subnets                              |
+--------------------------------------+-------------+--------------------------------------+
| 91c96c8a-d50d-407d-975e-a3264005743c | selfservice |                                      |
| a8204752-7bb3-480a-a552-b47583d8d21f | provider    | 81c27b53-03db-4058-b4a0-398662345d41 |
+--------------------------------------+-------------+--------------------------------------+

View the created network in the dashboard (screenshot omitted).

4.2.2 Create a subnet on the network

Create a subnet on the network (named selfservice-sub):

$ openstack subnet create --network selfservice \
  --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 selfservice-sub

Result:

[root@controller ~]# openstack subnet create --network selfservice \
>   --dns-nameserver 114.114.114.114 --gateway 172.16.1.1 \
>   --subnet-range 172.16.1.0/24 selfservice-sub
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                                                                                                  |
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools  | 172.16.1.2-172.16.1.254                                                                                                                                                                |
| cidr              | 172.16.1.0/24                                                                                                                                                                          |
| created_at        | 2020-11-29T11:48:58Z                                                                                                                                                                   |
| description       |                                                                                                                                                                                        |
| dns_nameservers   | 114.114.114.114                                                                                                                                                                        |
| enable_dhcp       | True                                                                                                                                                                                   |
| gateway_ip        | 172.16.1.1                                                                                                                                                                             |
| host_routes       |                                                                                                                                                                                        |
| id                | 0e06efb8-e1b6-4f77-adc8-c63c2eabd66a                                                                                                                                                   |
| ip_version        | 4                                                                                                                                                                                      |
| ipv6_address_mode | None                                                                                                                                                                                   |
| ipv6_ra_mode      | None                                                                                                                                                                                   |
| location          | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'myproject', 'id': u'6ec6bbfd97fd4fc0a02df4837216b432'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| name              | selfservice-sub                                                                                                                                                                        |
| network_id        | 91c96c8a-d50d-407d-975e-a3264005743c                                                                                                                                                   |
| prefix_length     | None                                                                                                                                                                                   |
| project_id        | 6ec6bbfd97fd4fc0a02df4837216b432                                                                                                                                                       |
| revision_number   | 0                                                                                                                                                                                      |
| segment_id        | None                                                                                                                                                                                   |
| service_types     |                                                                                                                                                                                        |
| subnetpool_id     | None                                                                                                                                                                                   |
| tags              |                                                                                                                                                                                        |
| updated_at        | 2020-11-29T11:48:58Z                                                                                                                                                                   |
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

查看创建的子网:

[root@controller ~]# openstack subnet list
+--------------------------------------+-----------------+--------------------------------------+-----------------+
| ID                                   | Name            | Network                              | Subnet          |
+--------------------------------------+-----------------+--------------------------------------+-----------------+
| 81c27b53-03db-4058-b4a0-398662345d41 | provider-sub    | a8204752-7bb3-480a-a552-b47583d8d21f | 192.168.92.0/24 |
| e65341bc-0ac1-4a3e-8730-54fbc3430de8 | selfservice-sub | 91c96c8a-d50d-407d-975e-a3264005743c | 172.16.1.0/24   |
+--------------------------------------+-----------------+--------------------------------------+-----------------+

切换到myuser用户登录dashboard查看网络拓扑图。

查看计算节点网卡变化

[root@compute1 ~]# ifconfig
brq891787bb-ce: flags=4163  mtu 1500
        inet 192.168.91.71  netmask 255.255.255.0  broadcast 192.168.91.255
        ether 00:0c:29:c7:ba:a5  txqueuelen 1000  (Ethernet)
        RX packets 107  bytes 15563 (15.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 130  bytes 17653 (17.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163  mtu 1500
        inet 192.168.90.71  netmask 255.255.255.0  broadcast 192.168.90.255
        inet6 fe80::b01a:e132:1923:175  prefixlen 64  scopeid 0x20
        inet6 fe80::2801:f5c2:4e5a:d003  prefixlen 64  scopeid 0x20
        ether 00:0c:29:c7:ba:9b  txqueuelen 1000  (Ethernet)
        RX packets 2591  bytes 1036598 (1012.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2555  bytes 2019296 (1.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens37: flags=4163  mtu 1500
        inet6 fe80::8af8:8b5c:793f:e719  prefixlen 64  scopeid 0x20
        ether 00:0c:29:c7:ba:a5  txqueuelen 1000  (Ethernet)
        RX packets 105  bytes 16893 (16.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 165  bytes 21265 (20.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens38: flags=4163  mtu 1500
        inet 192.168.92.71  netmask 255.255.255.0  broadcast 192.168.92.255
        inet6 fe80::e8d3:6442:89c0:cd4a  prefixlen 64  scopeid 0x20
        ether 00:0c:29:c7:ba:af  txqueuelen 1000  (Ethernet)
        RX packets 110  bytes 12508 (12.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173  bytes 12172 (11.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

4.2.3 创建路由器

自助服务(租户)网络通过虚拟路由器连接到提供商网络,路由器通常执行双向NAT。每个路由器至少包含一个自助服务网络上的接口和一个提供商网络上的网关。

提供商网络必须启用 router:external 选项,自助服务路由器才能通过它连接到外部网络(例如互联网)。admin 或其他特权用户必须在创建网络时包含此选项,或稍后添加。在本环境中,创建 provider 网络时已通过 --external 参数设置了 router:external 选项。
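作为参考,带 router:external 属性的 provider 网络当初可以这样创建(示例命令,需 admin 凭证;物理网络名与网络类型以实际的 ml2 配置为准):

```shell
# admin 凭证下创建共享的外部网络,--external 即设置 router:external=True
. admin-openrc
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
```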

1.在控制节点上,加载 demo 项目凭证以执行普通用户 CLI 命令:

$ . demo-openrc

2.创建路由器:

[root@controller ~]# openstack router create router
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| availability_zone_hints |                                      |
| availability_zones      |                                      |
| created_at              | 2018-06-11T04:58:58Z                 |
| description             |                                      |
| distributed             | False                                |
| external_gateway_info   | None                                 |
| flavor_id               | None                                 |
| ha                      | False                                |
| id                      | be1a4882-bc3f-43b4-9570-1414e1fae952 |
| name                    | router                               |
| project_id              | f3f0c1a2a9f74aa6ac030671f4c7ec33     |
| revision_number         | 1                                    |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    |                                      |
| updated_at              | 2018-06-11T04:58:58Z                 |
+-------------------------+--------------------------------------+

查看创建的路由器:

[root@controller ~]# openstack router list
+--------------------------------------+--------+--------+-------+-------------+-------+----------------------------------+
| ID                                   | Name   | Status | State | Distributed | HA    | Project                          |
+--------------------------------------+--------+--------+-------+-------------+-------+----------------------------------+
| be1a4882-bc3f-43b4-9570-1414e1fae952 | router | ACTIVE | UP    | False       | False | f3f0c1a2a9f74aa6ac030671f4c7ec33 |
+--------------------------------------+--------+--------+-------+-------------+-------+----------------------------------+

登录dashboard查看创建的路由器。

4.2.4 租户网络添加到路由器

将自助服务网络子网添加为路由器上的接口:

[root@controller ~]# neutron router-interface-add router selfservice-sub
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Added interface d6ef6924-a4a2-4746-9bf7-9c7c1b13f32c to router router.
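上面的输出提示 neutron CLI 已废弃,等效的 openstack CLI 写法如下(router add subnet 子命令在 Stein 版客户端中可用):

```shell
# 等效于 neutron router-interface-add:将子网作为接口加入路由器
openstack router add subnet router selfservice-sub
```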

4.2.5 路由器连接到外部网络

在路由器上的提供商网络上设置网关:

[root@controller ~]# neutron router-gateway-set router provider
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Set gateway for router router
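同样,设置外部网关也可以改用未废弃的 openstack CLI(--external-gateway 指定提供商网络):

```shell
# 等效于 neutron router-gateway-set:把 provider 网络设为路由器的外部网关
openstack router set router --external-gateway provider
```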

切换demo用户登录dashboard查看网络拓扑图变化。

4.2.6 验证操作

建议在继续之前验证操作并解决所有问题。以下步骤使用网络和子网创建示例中的 IP 地址范围。
1.在控制节点上,加载 admin 凭证以执行管理员 CLI 命令:

$ . admin-openrc

2.列出网络命名空间,应该看到一个 qrouter 命名空间和两个 qdhcp 命名空间:

[root@controller ~]# ip netns
qrouter-be1a4882-bc3f-43b4-9570-1414e1fae952 (id: 2)
qdhcp-a248893f-0267-4aa4-8766-ac87e525a057 (id: 1)
qdhcp-891787bb-ce4a-4b41-b222-1493ec30035c (id: 0)

3.列出路由器上的端口以确定提供商网络上的网关IP地址:

[root@controller ~]# neutron router-port-list router
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | tenant_id                        | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
| 21ce30f1-8252-4bea-a9b2-5910056182de |      | 6ec6bbfd97fd4fc0a02df4837216b432 | fa:16:3e:68:ef:52 | {"subnet_id": "e65341bc-0ac1-4a3e-8730-54fbc3430de8", "ip_address": "172.16.1.1"}    |
| 5aafdf3b-79c0-4fb1-b76f-31138a2d3f50 |      |                                  | fa:16:3e:7a:78:34 | {"subnet_id": "81c27b53-03db-4058-b4a0-398662345d41", "ip_address": "192.168.92.82"} |
+--------------------------------------+------+----------------------------------+-------------------+--------------------------------------------------------------------------------------+
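neutron router-port-list 同样已废弃,可以改用 openstack CLI 查看路由器端口(--router 按路由器过滤端口):

```shell
# 等效于 neutron router-port-list:列出挂在路由器上的端口及其 fixed_ips
openstack port list --router router
```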

4.从控制节点或物理提供商网络上的任何主机ping此IP地址:

[root@controller ~]# ping 192.168.92.82 -c4
PING 192.168.92.82 (192.168.92.82) 56(84) bytes of data.
64 bytes from 192.168.92.82: icmp_seq=1 ttl=64 time=0.152 ms
64 bytes from 192.168.92.82: icmp_seq=2 ttl=64 time=0.098 ms
64 bytes from 192.168.92.82: icmp_seq=3 ttl=64 time=0.129 ms
64 bytes from 192.168.92.82: icmp_seq=4 ttl=64 time=0.067 ms

--- 192.168.92.82 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.067/0.111/0.152/0.033 ms

4.3 创建实例类型

最小的默认flavor每个实例消耗512 MB内存。对于计算节点内存少于4 GB的环境,建议创建每个实例仅需64 MB内存的m1.nano flavor。出于测试目的,该flavor请仅配合CirrOS镜像使用。

[root@controller ~]# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 0       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 64      |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

这里创建两种规格的实例类型:

[root@controller ~]# openstack flavor create --id 1 --vcpus 1 --ram 1024 --disk 10 m2.nano 
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 10      |
| id                         | 1       |
| name                       | m2.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 1024    |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+

参数说明:
openstack flavor create 创建实例类型(flavor)
--id 实例类型ID
--vcpus vCPU数量
--ram 内存大小(默认单位MB)
--disk 磁盘大小(默认单位GB)
查看创建的实例类型:

[root@controller ~]# openstack flavor list
+----+---------+------+------+-----------+-------+-----------+
| ID | Name    |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+------+------+-----------+-------+-----------+
| 0  | m1.nano |   64 |    1 |         0 |     1 | True      |
| 1  | m2.nano | 1024 |   10 |         0 |     1 | True      |
+----+---------+------+------+-----------+-------+-----------+

切换到admin用户登录dashboard查看创建的实例类型。

4.4 生成密钥对

大多数云镜像支持公钥认证,而不是传统的密码认证。 在启动实例之前,您必须将公钥添加到Compute服务。
1.加载demo项目凭证:

$ . demo-openrc

2.生成密钥对并添加公钥:
生成密钥文件(一个公钥文件和一个私钥文件),保存在 /root/.ssh/ 目录下

[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa): 
[root@controller ~]# ll /root/.ssh/
total 12
-rw------- 1 root root 1675 Jun 11 13:26 id_rsa        #生成的私钥文件
-rw-r--r-- 1 root root  397 Jun 11 13:26 id_rsa.pub    #生成的公钥文件

创建密钥对,并将生成的公钥文件添加进去:

 [root@controller ~]# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 6a:2a:05:f3:a8:98:c8:23:65:00:92:6d:24:ee:60:f4 |
| name        | mykey                                           |
| user_id     | a331e97f2ac6444484371a70e1299636                |
+-------------+-------------------------------------------------+

3.验证密钥对是否添加成功:

[root@controller ~]# openstack keypair list  
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 6a:2a:05:f3:a8:98:c8:23:65:00:92:6d:24:ee:60:f4 |
+-------+-------------------------------------------------+

登录dashboard查看创建的密钥对。

4.5 添加安全组规则

默认情况下,default安全组适用于所有实例,其中包含拒绝远程访问实例的防火墙规则。对于CirrOS这样的Linux镜像,建议至少放行ICMP(ping)和安全shell(SSH)。
向default安全组添加规则:
1.允许ICMP(ping):

[root@controller ~]# openstack security group rule create --proto icmp default
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2018-06-11T05:32:26Z                 |
| description       |                                      |
| direction         | ingress                              |
| ether_type        | IPv4                                 |
| id                | 30510627-2cfc-4028-9e38-a99a2a581f16 |
| name              | None                                 |
| port_range_max    | None                                 |
| port_range_min    | None                                 |
| project_id        | f3f0c1a2a9f74aa6ac030671f4c7ec33     |
| protocol          | icmp                                 |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 0                                    |
| security_group_id | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| updated_at        | 2018-06-11T05:32:26Z                 |
+-------------------+--------------------------------------+

2.允许安全shell(SSH)访问:

[root@controller ~]#  openstack security group rule create --proto tcp --dst-port 22 default
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| created_at        | 2018-06-11T05:33:30Z                 |
| description       |                                      |
| direction         | ingress                              |
| ether_type        | IPv4                                 |
| id                | 0f19058a-0bfd-424e-bedd-6249e00d7043 |
| name              | None                                 |
| port_range_max    | 22                                   |
| port_range_min    | 22                                   |
| project_id        | f3f0c1a2a9f74aa6ac030671f4c7ec33     |
| protocol          | tcp                                  |
| remote_group_id   | None                                 |
| remote_ip_prefix  | 0.0.0.0/0                            |
| revision_number   | 0                                    |
| security_group_id | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| updated_at        | 2018-06-11T05:33:30Z                 |
+-------------------+--------------------------------------+

查看安全组及创建的安全组规则:

[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| f2a06fdc-0799-4289-b758-eeec43c16a55 | default | Default security group | f3f0c1a2a9f74aa6ac030671f4c7ec33 |
+--------------------------------------+---------+------------------------+----------------------------------+
[root@controller ~]# openstack security group rule list
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Remote Security Group                | Security Group                       |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| 0f19058a-0bfd-424e-bedd-6249e00d7043 | tcp         | 0.0.0.0/0 | 22:22      | None                                 | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| 20b109d3-bdf1-419e-87d7-1836313b208b | None        | None      |            | f2a06fdc-0799-4289-b758-eeec43c16a55 | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| 30510627-2cfc-4028-9e38-a99a2a581f16 | icmp        | 0.0.0.0/0 |            | None                                 | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| 6772b2c2-c5fc-4eea-bfe3-b9f2a8dad0cb | None        | None      |            | None                                 | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| 97e3620d-6f2a-459a-b712-257fa6e31f9d | None        | None      |            | f2a06fdc-0799-4289-b758-eeec43c16a55 | f2a06fdc-0799-4289-b758-eeec43c16a55 |
| e7dcfc28-924f-455b-b8dd-cfe3e1325c9f | None        | None      |            | None                                 | f2a06fdc-0799-4289-b758-eeec43c16a55 |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+

切换到demo用户登录dashboard查看创建的安全组规则。

4.6 确认实例选项

要启动实例,必须至少指定flavor、镜像名称、网络、安全组、密钥和实例名称。
1.在控制节点上,加载 demo 凭证以执行普通用户 CLI 命令:

$ . demo-openrc

2.flavor 定义了虚拟资源的分配方案,包括处理器、内存和存储。
列出可用的flavor:

[root@controller ~]# openstack flavor list
+----+---------+------+------+-----------+-------+-----------+
| ID | Name    |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------+------+------+-----------+-------+-----------+
| 0  | m1.nano |   64 |    1 |         0 |     1 | True      |
| 1  | m2.nano | 1024 |   10 |         0 |     1 | True      |
+----+---------+------+------+-----------+-------+-----------+

3.列出镜像

[root@controller ~]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| de140769-4ce3-4a1b-9651-07a915b21caa | cirros | active |
+--------------------------------------+--------+--------+

本实例使用cirros镜像
4.列出可用的网络

[root@controller ~]# openstack network list
+--------------------------------------+--------------+--------------------------------------+
| ID                                   | Name         | Subnets                              |
+--------------------------------------+--------------+--------------------------------------+
| 0e728aa4-d9bd-456b-ba0b-dd7df5e15c96 | selfservice1 | e96f6670-3208-40bb-a4f8-21beb37382db |
| 891787bb-ce4a-4b41-b222-1493ec30035c | provider     | 639dcbb3-9bdf-4db8-9734-c4556f1e7972 |
| ded70080-a8d2-41fc-8351-2cf2eb45c308 | selfservice2 | 5137d1b5-f697-4e9a-b0ea-8981e1f1dd3d |
+--------------------------------------+--------------+--------------------------------------+

本示例在 selfservice1 租户网络上创建实例。注意引用网络时必须使用 ID 而不是名称。
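由于必须用 ID 引用网络,可以借助 -f value -c id 把网络 ID 存入变量,避免手工复制粘贴(示例,假设网络名为 selfservice1):

```shell
# 取出网络 ID,供 openstack server create --nic net-id=$NET_ID 使用
NET_ID=$(openstack network show selfservice1 -f value -c id)
echo "$NET_ID"
```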
5.列出可用的安全组:

[root@controller ~]# openstack security group list
+--------------------------------------+---------+------------------------+----------------------------------+
| ID                                   | Name    | Description            | Project                          |
+--------------------------------------+---------+------------------------+----------------------------------+
| 0b8e6943-af2e-4b16-9f06-da3ceb17e105 | default | Default security group | 07f75876b05945e0816b6e219ee6c9f7 |
+--------------------------------------+---------+------------------------+----------------------------------+

此实例使用default安全组。
6.列出可用的密钥对:

[root@controller ~]# openstack keypair list  
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | 6a:2a:05:f3:a8:98:c8:23:65:00:92:6d:24:ee:60:f4 |
+-------+-------------------------------------------------+

4.7 创建实例

在selfservice1租户网络上创建实例:

$ openstack server create --flavor m1.nano --image cirros \
  --nic net-id=0e728aa4-d9bd-456b-ba0b-dd7df5e15c96 --security-group default \
  --key-name mykey selfservice-cirros

执行结果:

[root@controller ~]# openstack server create --flavor m1.nano --image cirros \
>   --nic net-id=0e728aa4-d9bd-456b-ba0b-dd7df5e15c96 --security-group default \
>   --key-name mykey selfservice1-cirros1
+-----------------------------+-----------------------------------------------+
| Field                       | Value                                         |
+-----------------------------+-----------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                        |
| OS-EXT-AZ:availability_zone |                                               |
| OS-EXT-STS:power_state      | NOSTATE                                       |
| OS-EXT-STS:task_state       | scheduling                                    |
| OS-EXT-STS:vm_state         | building                                      |
| OS-SRV-USG:launched_at      | None                                          |
| OS-SRV-USG:terminated_at    | None                                          |
| accessIPv4                  |                                               |
| accessIPv6                  |                                               |
| addresses                   |                                               |
| adminPass                   | tBi28a8xEiQ9                                  |
| config_drive                |                                               |
| created                     | 2018-06-14T06:59:03Z                          |
| flavor                      | m1.nano (0)                                   |
| hostId                      |                                               |
| id                          | 497b11d9-711d-467d-a1d0-acbebee22f6b          |
| image                       | cirros (de140769-4ce3-4a1b-9651-07a915b21caa) |
| key_name                    | mykey                                         |
| name                        | selfservice-cirros                          |
| progress                    | 0                                             |
| project_id                  | f3f0c1a2a9f74aa6ac030671f4c7ec33              |
| properties                  |                                               |
| security_groups             | name='f2a06fdc-0799-4289-b758-eeec43c16a55'   |
| status                      | BUILD                                         |
| updated                     | 2018-06-14T06:59:05Z                          |
| user_id                     | a331e97f2ac6444484371a70e1299636              |
| volumes_attached            |                                               |
+-----------------------------+-----------------------------------------------+

参数说明:
openstack server create 创建实例
--flavor 实例类型名称
--image 镜像名称
--nic net-id= 网络ID
--security-group 安全组名称
--key-name 密钥对名称
最后一个参数为自定义的实例名称
检查实例状态:

[root@controller ~]# openstack server list
+--------------------------------------+----------------------+--------+-------------------------+--------+---------+
| ID                                   | Name                 | Status | Networks                | Image  | Flavor  |
+--------------------------------------+----------------------+--------+-------------------------+--------+---------+
| 23f3ac0c-4e6f-46be-8a3d-9e291343b441 | selfservice2-cirros2 | ACTIVE | selfservice2=172.16.2.6 | cirros | m1.nano |
| 31f2e6ac-3743-43f6-b6b2-49e22d63373b | selfservice2-cirros1 | ACTIVE | selfservice2=172.16.2.9 | cirros | m1.nano |
| 025d22b8-fd71-4eae-912e-6f54ee4e956f | selfservice1-cirros2 | ACTIVE | selfservice1=172.16.1.5 | cirros | m1.nano |
| 497b11d9-711d-467d-a1d0-acbebee22f6b | selfservice1-cirros1 | ACTIVE | selfservice1=172.16.1.8 | cirros | m1.nano |
+--------------------------------------+----------------------+--------+-------------------------+--------+---------+

目前从控制节点还无法直接ping通实例的租户网络地址。

4.8 虚拟控制台访问实例

加载demo-openrc环境

$ . demo-openrc

为您的实例获取虚拟网络计算(VNC)会话URL并从Web浏览器访问它:

[root@controller ~]# openstack console url show selfservice-cirros
+-------+---------------------------------------------------------------------------------+
| Field | Value                                                                           |
+-------+---------------------------------------------------------------------------------+
| type  | novnc                                                                           |
| url   | http://controller:6080/vnc_auto.html?token=5dcf9b77-297c-4d6d-9445-f6ba77cb8a18 |
+-------+---------------------------------------------------------------------------------+

如果 Web 浏览器所在主机无法解析 controller 主机名,可以将 URL 中的 controller 替换为控制节点管理网络接口的 IP 地址。
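也可以直接为客户端添加一条 hosts 解析记录(示例,IP 为本环境控制节点的管理网地址;Windows 客户端对应文件为 C:\Windows\System32\drivers\etc\hosts):

```shell
# 在 Linux 客户端上添加 controller 的主机名解析
echo "192.168.90.70 controller" >> /etc/hosts
```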

在实例的VNC控制台内测试对外部网络的访问:

   ping 172.16.1.1    租户网络网关
   ping 192.168.92.2  本地外部网络网关
   ping www.baidu.com 外部互联网

测试全部能够正常ping通

CirrOS镜像同时支持传统的用户名/密码认证,登录提示符处会给出这些凭据。登录CirrOS后,建议使用ping验证网络连通性。默认用户名cirros,默认密码gocubsgo。

4.9 为实例分配浮动IP地址

如果想通过外网远程连接到实例,需要在外部网络上创建浮动IP地址,并将浮动ip地址关联到实例上,然后通过访问外部的浮动ip地址来访问实例:
1.在外部网络上生成浮动ip地址:

[root@controller ~]# openstack floating ip create provider
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                                                                                                  |
+---------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at          | 2020-11-29T15:18:38Z                                                                                                                                                                   |
| description         |                                                                                                                                                                                        |
| dns_domain          | None                                                                                                                                                                                   |
| dns_name            | None                                                                                                                                                                                   |
| fixed_ip_address    | None                                                                                                                                                                                   |
| floating_ip_address | 192.168.92.87                                                                                                                                                                          |
| floating_network_id | a8204752-7bb3-480a-a552-b47583d8d21f                                                                                                                                                   |
| id                  | f098d2ff-f7ce-4114-a144-9b82ab83e428                                                                                                                                                   |
| location            | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'myproject', 'id': u'6ec6bbfd97fd4fc0a02df4837216b432'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| name                | 192.168.92.87                                                                                                                                                                          |
| port_details        | None                                                                                                                                                                                   |
| port_id             | None                                                                                                                                                                                   |
| project_id          | 6ec6bbfd97fd4fc0a02df4837216b432                                                                                                                                                       |
| qos_policy_id       | None                                                                                                                                                                                   |
| revision_number     | 0                                                                                                                                                                                      |
| router_id           | None                                                                                                                                                                                   |
| status              | DOWN                                                                                                                                                                                   |
| subnet_id           | None                                                                                                                                                                                   |
| tags                | []                                                                                                                                                                                     |
| updated_at          | 2020-11-29T15:18:38Z                                                                                                                                                                                                                 |
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

2. Associate the floating IP address with the instance:

$ openstack server add floating ip selfservice-cirros 192.168.92.87

3. Check the association status of the floating IP address:

[root@controller ~]# openstack server list
+--------------------------------------+--------------------+--------+-----------------------------------------+--------+---------+
| ID                                   | Name               | Status | Networks                                | Image  | Flavor  |
+--------------------------------------+--------------------+--------+-----------------------------------------+--------+---------+
| 6888d013-6d41-4c58-b998-b0ffba63ad7a | selfservice-cirros | ACTIVE | selfservice=172.16.1.237, 192.168.92.87 | cirros | m1.nano |
+--------------------------------------+--------------------+--------+-----------------------------------------+--------+---------+

Repeat the same steps to create and associate floating IPs for any other instances.

4. From the controller node or any host on the provider physical network, verify connectivity to the instance via its floating IP address.

Test from the controller node:

[root@controller ~]# ping 192.168.92.87
PING 192.168.92.87 (192.168.92.87) 56(84) bytes of data.
64 bytes from 192.168.92.87: icmp_seq=1 ttl=63 time=1.27 ms

Test from the local Windows host:

C:\Users\Shaun>ping 192.168.92.87

Pinging 192.168.92.87 with 32 bytes of data:
Reply from 192.168.92.87: bytes=32 time=1ms TTL=63
Reply from 192.168.92.87: bytes=32 time<1ms TTL=63

4.10 Remote SSH Access to the Instance

Log in to the instance from the controller node or a remote host:

[root@controller ~]# ssh cirros@192.168.92.86
ssh: connect to host 192.168.92.86 port 22: No route to host
[root@controller ~]# ssh cirros@192.168.92.87
The authenticity of host '192.168.92.87 (192.168.92.87)' can't be established.
ECDSA key fingerprint is SHA256:p5xB5W16qV3eUEj7+lTOfOb5pfKtziwakrUgL4UZHFU.
ECDSA key fingerprint is MD5:04:e6:0c:de:a3:d0:89:f5:23:e5:3e:d0:c0:8d:4f:54.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.92.87' (ECDSA) to the list of known hosts.
$ ifconfig
eth0      Link encap:Ethernet  HWaddr FA:16:3E:8B:FB:8F
          inet addr:172.16.1.237  Bcast:172.16.1.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe8b:fb8f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:131 errors:0 dropped:0 overruns:0 frame:0
          TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16696 (16.3 KiB)  TX bytes:15213 (14.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

$

4.11 Network Interface Changes

Once the self-service network and instances have been created, the vxlan tunnel is established: the system creates one vxlan VTEP on the controller node and one on the compute node.
In the listings below, the first (controller node) shows vxlan-1, and the second (compute node) shows vxlan-1 as well. These two VTEP devices form the two endpoints of the vxlan tunnel.

[root@controller ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
brq91c96c8a-d5          8000.72e4438ffd76       no              tap21ce30f1-82
                                                        tap574d1241-0b
                                                        vxlan-1

[root@computer ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
brq91c96c8a-d5          8000.ea362deff7b8       no              tap9164e5e0-6a
                                                        vxlan-1

Inspecting the details of vxlan-1 on the compute node shows that it is bound to the ens33 NIC:

[root@computer ~]# ip -d link show dev vxlan-1
8: vxlan-1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master brq91c96c8a-d5 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ea:36:2d:ef:f7:b8 brd ff:ff:ff:ff:ff:ff promiscuity 1
    vxlan id 1 dev ens33 srcport 0 0 dstport 8472 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx
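Note that vxlan-1 and the tap interfaces report an MTU of 1450 while the underlay NICs use 1500. The 50-byte gap is the standard IPv4 VXLAN encapsulation overhead; a quick Python sketch of the arithmetic:

```python
# VXLAN over IPv4 wraps each inner Ethernet frame in four outer headers.
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IP header
OUTER_UDP = 8        # UDP header (Linux default dstport 8472, as shown above)
VXLAN_HEADER = 8     # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
underlay_mtu = 1500  # MTU of ens33, the tunnel underlay NIC

print(underlay_mtu - overhead)  # → 1450, the MTU on vxlan-1 and the taps
```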

4.12 Block Storage

4.12.1 Create a Volume

$ . demo-openrc

Create a 1 GB volume:

[root@controller ~]# openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-11-29T15:36:54.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 6650b0de-0a57-4252-aacd-e3c55c7e0603 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 90786d01190b4cb180fde058f8426fb3     |
+---------------------+--------------------------------------+

在短时间内,卷状态应该从创建变为可用:

[root@controller ~]# openstack volume list
+--------------------------------------+---------+-----------+------+-------------+
| ID                                   | Name    | Status    | Size | Attached to |
+--------------------------------------+---------+-----------+------+-------------+
| e3d30f58-6c0a-4f13-8433-c274e014fc3f | volume1 | available |    1 |             |
+--------------------------------------+---------+-----------+------+-------------+

4.12.2 Attach the Volume to an Instance

Attach the volume (openstack server add volume selfservice-cirros volume1), then confirm that it shows as in-use:

[root@controller ~]# openstack volume list
+--------------------------------------+---------+--------+------+---------------------------------------------+
| ID                                   | Name    | Status | Size | Attached to                                 |
+--------------------------------------+---------+--------+------+---------------------------------------------+
| 6650b0de-0a57-4252-aacd-e3c55c7e0603 | volume1 | in-use |    1 | Attached to selfservice-cirros on /dev/vdb  |
+--------------------------------------+---------+--------+------+---------------------------------------------+

SSH into your instance and use the fdisk command to verify that the volume is present as the /dev/vdb block storage device:

$ sudo fdisk -l
Disk /dev/vda: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 127BB530-4FDD-4855-B653-77C39F7AE9C4

Device     Start     End Sectors  Size Type
/dev/vda1  18432 2097118 2078687 1015M Linux filesystem
/dev/vda15  2048   18431   16384    8M EFI System

Partition table entries are not in disk order.


Disk /dev/vdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

$ df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                     19.2M         0     19.2M   0% /dev
/dev/vda1               978.9M     24.0M    914.1M   3% /
tmpfs                    23.2M         0     23.2M   0% /dev/shm
tmpfs                    23.2M     92.0K     23.1M   0% /run


$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    1G  0 disk

You must create a filesystem on the device and mount it before the volume can be used.

$ sudo fdisk /dev/vdb

Welcome to fdisk (util-linux 2.27).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xf12e0888.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2097151, default 2097151):

Created a new partition 1 of type 'Linux' and of size 1023 MiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
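fdisk reports the new partition as 1023 MiB; that figure follows directly from the sector range it printed, as this small Python check shows:

```python
SECTOR_SIZE = 512             # bytes per sector, as reported by fdisk
first, last = 2048, 2097151   # sector range of /dev/vdb1 from the session above

size_bytes = (last - first + 1) * SECTOR_SIZE
print(size_bytes // (1024 * 1024))  # → 1023 MiB, matching fdisk's output
```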


$ sudo partx -a /dev/vdb1
partx: /dev/vdb: error adding partition 1
$ sudo mkfs.ext4 /dev/vdb1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 261888 4k blocks and 65536 inodes
Filesystem UUID: 09d4ac6e-fe8a-4ccd-9967-26fd7fc1e970
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done


# Note: the cirros image used here cannot mount the new filesystem; a different image is needed.

Return to launching instances.

5 Creating Instances from an Official Cloud Image

In OpenStack, Glance provides the image service. An image is a pre-packaged file containing an operating system and pre-installed software. When a virtual machine is created from an image, OpenStack uses the image as a backing file: the new VM's disk keeps a link back to the image file.
OpenStack instances (virtual machines / cloud hosts) are deployed from Glance images. Download a standard cloud image: all major Linux distributions provide cloud images that can be used directly in OpenStack.

5.1 Download an Official Generic Cloud Image

1. Source the environment variables:

[root@controller ~]# . admin-openrc

2. Download the qcow2-format OpenStack cloud image from the official CentOS site:

wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1811.qcow2c

Official link: http://cloud.centos.org/centos/7/images

5.2 Upload the Image to Glance

[root@controller ~]# openstack image create "CentOS7-image" \
--file CentOS-7-x86_64-GenericCloud-1811.qcow2c \
--disk-format qcow2 --container-format bare \
--public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                      |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 2d7aa865ec35b39d682c62dfe5d3037d                                                                                                                                                           |
| container_format | bare                                                                                                                                                                                       |
| created_at       | 2020-12-01T08:05:29Z                                                                                                                                                                       |
| disk_format      | qcow2                                                                                                                                                                                      |
| file             | /v2/images/e8f6d113-a283-40f3-b67c-c537da708dd6/file                                                                                                                                       |
| id               | e8f6d113-a283-40f3-b67c-c537da708dd6                                                                                                                                                       |
| min_disk         | 0                                                                                                                                                                                          |
| min_ram          | 0                                                                                                                                                                                          |
| name             | CentOS7-image                                                                                                                                                                              |
| owner            | 8c17b9e0e7ff4393b4b1a883ca8efffe                                                                                                                                                           |
| properties       | os_hash_algo='sha512', os_hash_value='82c5b738e2c6d0d7a096c4a68e41304e921647b5f4f8aedfb9228fca9b398492fa98c402a9c3ae7e08e079e887fb011d8214d4d8b06434422b2d0aa2bbfcc28b', os_hidden='False' |
| protected        | False                                                                                                                                                                                      |
| schema           | /v2/schemas/image                                                                                                                                                                          |
| size             | 414830592                                                                                                                                                                                  |
| status           | active                                                                                                                                                                                     |
| tags             |                                                                                                                                                                                            |
| updated_at       | 2020-12-01T08:05:31Z                                                                                                                                                                       |
| virtual_size     | None                                                                                                                                                                                       |
| visibility       | public                                                                                                                                                                                     |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

View the uploaded image:

[root@controller ~]# openstack image list
+--------------------------------------+---------------+--------+
| ID                                   | Name          | Status |
+--------------------------------------+---------------+--------+
| e8f6d113-a283-40f3-b67c-c537da708dd6 | CentOS7-image | active |
| e2fc9a86-f7e3-4598-848e-2d79cb060cc2 | cirros        | active |
+--------------------------------------+---------------+--------+
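Glance records an md5 digest for the uploaded image (the checksum field above). You can compute the same digest locally before uploading to confirm the download is intact; a small Python sketch, using the filename downloaded in 5.1:

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

# Compare against the 'checksum' field Glance reports after the upload:
# file_md5("CentOS-7-x86_64-GenericCloud-1811.qcow2c")
# should equal "2d7aa865ec35b39d682c62dfe5d3037d"
```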

5.3 Create an Instance

(Figures 10 and 11: launching the instance from the Horizon dashboard)

5.4 Assign a Floating IP Address to the Instance

Only the admin user is entitled to use the provider network, and all instances so far have been created as the tenant. Since tenants have no access to the provider network, select the self-service network here.

Allocate a floating IP and associate it with the instance:

(Figure 12: allocating and associating the floating IP in the dashboard)

From the controller node or any host on the provider physical network, verify connectivity to the instance via its floating IP address.

Test from the controller node:

[root@controller ~]# ping 192.168.92.88
PING 192.168.92.88 (192.168.92.88) 56(84) bytes of data.
64 bytes from 192.168.92.88: icmp_seq=1 ttl=63 time=0.876 ms
64 bytes from 192.168.92.88: icmp_seq=2 ttl=63 time=1.00 ms

Test from the local Windows host:

C:\Users\Shaun>ping 192.168.92.88

Pinging 192.168.92.88 with 32 bytes of data:
Reply from 192.168.92.88: bytes=32 time=1ms TTL=63
Reply from 192.168.92.88: bytes=32 time=1ms TTL=63

5.5 Remote SSH Access to the Instance

Log in to the instance from the controller node:

[root@controller ~]# ssh centos@192.168.92.88
Last login: Tue Dec  1 08:50:44 2020 from 192.168.92.70
[centos@centos-1c-2g-20g ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.16.1.84  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::f816:3eff:fe8f:a72b  prefixlen 64  scopeid 0x20
        ether fa:16:3e:8f:a7:2b  txqueuelen 1000  (Ethernet)
        RX packets 391  bytes 49032 (47.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 433  bytes 49605 (48.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 6  bytes 416 (416.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6  bytes 416 (416.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[centos@centos-1c-2g-20g ~]$ ping www.baidu.com
PING www.a.shifen.com (14.215.177.39) 56(84) bytes of data.
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=1 ttl=127 time=902 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=2 ttl=127 time=640 ms
64 bytes from 14.215.177.39 (14.215.177.39): icmp_seq=3 ttl=127 time=838 ms
^C
--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2749ms
rtt min/avg/max/mdev = 640.471/794.049/902.856/111.703 ms

Set a root password for the instance and enable remote SSH password login:

[centos@centos-1c-2g-20g ~]$ sudo su root
[root@centos-1c-2g-20g centos]# passwd root
Changing password for user root.
New password:
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
[root@centos-1c-2g-20g centos]# vi /etc/ssh/sshd_config 
     63 PasswordAuthentication yes
     64 #PermitEmptyPasswords no
     65 #PasswordAuthentication no
[root@centos-1c-2g-20g centos]# systemctl restart sshd
[root@centos-1c-2g-20g centos]# exit
exit
[centos@centos-1c-2g-20g ~]$ exit
logout
Connection to 192.168.92.88 closed.
[root@controller ~]#

You can now log in to the instance with the root account and the password just set.
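The manual vi edit of sshd_config above can also be scripted. A minimal Python sketch (illustrative only, not the distribution's tooling) that forces PasswordAuthentication on in sshd_config-style text:

```python
import re

def enable_password_auth(config_text):
    """Return config text with a single active 'PasswordAuthentication yes' line."""
    out, replaced = [], False
    for line in config_text.splitlines():
        if re.match(r"\s*#?\s*PasswordAuthentication\b", line):
            if not replaced:          # first match becomes the active setting
                out.append("PasswordAuthentication yes")
                replaced = True
            continue                  # drop duplicate/commented directives
        out.append(line)
    if not replaced:                  # directive absent: append it
        out.append("PasswordAuthentication yes")
    return "\n".join(out) + "\n"

sample = "#PermitEmptyPasswords no\n#PasswordAuthentication no\n"
print(enable_password_auth(sample), end="")
```

In practice the function would read and rewrite /etc/ssh/sshd_config, followed by systemctl restart sshd as shown above.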

6 Current Network Interface State

6.1 Controller Node

[root@controller ~]# nmcli connection show
NAME            UUID                                  TYPE      DEVICE
ens35           1777ed92-ff58-7956-b8b3-ed928f82e0c8  ethernet  ens35
brqa8204752-7b  14dc9fa6-0885-41ff-b75c-7dc6e372c2e4  bridge    brqa8204752-7b
ens33           c96bc909-188e-ec64-3a96-6a90982b08ad  ethernet  ens33
ens34           94aea789-efb3-ef4c-81b0-e8b18ecc9797  ethernet  ens34
brq91c96c8a-d5  81f8d6be-22e4-4163-a62b-d1bb93a3b76d  bridge    brq91c96c8a-d5
[root@controller ~]# ifconfig
brq91c96c8a-d5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::941b:d1ff:fe1a:4218  prefixlen 64  scopeid 0x20
        ether 0e:9e:7b:99:37:1a  txqueuelen 1000  (Ethernet)
        RX packets 17  bytes 1492 (1.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

brqa8204752-7b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.92.70  netmask 255.255.255.0  broadcast 192.168.92.255
        inet6 fe80::7015:3dff:fe0d:7e6f  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:5b  txqueuelen 1000  (Ethernet)
        RX packets 786  bytes 83755 (81.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 639  bytes 62110 (60.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.90.70  netmask 255.255.255.0  broadcast 192.168.90.255
        inet6 fe80::fc0d:675:2ad6:f897  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:47  txqueuelen 1000  (Ethernet)
        RX packets 32987  bytes 5037935 (4.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 602347  bytes 1657428387 (1.5 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.91.70  netmask 255.255.255.0  broadcast 192.168.91.255
        inet6 fe80::1ddb:eab2:f3d4:2273  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:51  txqueuelen 1000  (Ethernet)
        RX packets 1248  bytes 307183 (299.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1080  bytes 153566 (149.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens35: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::a786:42fa:f068:a716  prefixlen 64  scopeid 0x20
        ether 00:0c:29:f2:f9:5b  txqueuelen 1000  (Ethernet)
        RX packets 974  bytes 96265 (94.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 968  bytes 201074 (196.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 179281  bytes 83766925 (79.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 179281  bytes 83766925 (79.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap21ce30f1-82: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 0e:9e:7b:99:37:1a  txqueuelen 1000  (Ethernet)
        RX packets 1017  bytes 98278 (95.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1146  bytes 241306 (235.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap574d1241-0b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether ae:ce:a5:8f:74:99  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 1222 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 2116 (2.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap5aafdf3b-79: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether c2:1d:c8:6d:1c:8b  txqueuelen 1000  (Ethernet)
        RX packets 1019  bytes 226546 (221.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1246  bytes 121813 (118.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapaa8ecb85-65: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether de:53:92:c8:a1:0b  txqueuelen 1000  (Ethernet)
        RX packets 5  bytes 446 (446.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 367  bytes 39033 (38.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vxlan-1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 36:82:41:17:07:f0  txqueuelen 1000  (Ethernet)
        RX packets 1147  bytes 225310 (220.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1012  bytes 84196 (82.2 KiB)
        TX errors 0  dropped 7 overruns 0  carrier 0  collisions 0
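The flags=4163 and flags=73 values in these listings are bitmasks of interface state flags. Decoding them (bit values taken from Linux's <linux/if.h>) with a short Python sketch:

```python
# Interface flag bits from <linux/if.h>
IFF_BITS = {
    0x1: "UP",
    0x2: "BROADCAST",
    0x8: "LOOPBACK",
    0x40: "RUNNING",
    0x1000: "MULTICAST",
}

def decode_flags(flags):
    """Render an ifconfig-style flag list for the bits listed above."""
    return ",".join(name for bit, name in sorted(IFF_BITS.items()) if flags & bit)

print(decode_flags(4163))  # → UP,BROADCAST,RUNNING,MULTICAST (NICs, bridges, taps)
print(decode_flags(73))    # → UP,LOOPBACK,RUNNING (lo)
```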

6.2 Compute Node

[root@computer ~]# nmcli connection show
NAME            UUID                                  TYPE      DEVICE
ens34           94aea789-efb3-ef4c-81b0-e8b18ecc9797  ethernet  ens34
ens32           152beb06-47c5-c5e8-95a9-385590654382  ethernet  ens32
ens33           c96bc909-188e-ec64-3a96-6a90982b08ad  ethernet  ens33
brq91c96c8a-d5  5a0df80c-3d27-49b1-8022-91ee4057ac64  bridge    brq91c96c8a-d5
tap38ee592a-3c  d3d5e5fe-064a-4181-8f0b-3ea3fe7d4f6b  tun       tap38ee592a-3c
[root@computer ~]# ifconfig
brq91c96c8a-d5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 1e:04:03:ef:c3:86  txqueuelen 1000  (Ethernet)
        RX packets 11  bytes 1228 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens32: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.90.71  netmask 255.255.255.0  broadcast 192.168.90.255
        inet6 fe80::64b7:7c1d:5771:f22a  prefixlen 64  scopeid 0x20
        ether 00:0c:29:4f:55:b3  txqueuelen 1000  (Ethernet)
        RX packets 181570  bytes 224238899 (213.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 76881  bytes 49041831 (46.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.91.71  netmask 255.255.255.0  broadcast 192.168.91.255
        inet6 fe80::d7ce:c82b:4a3e:61ba  prefixlen 64  scopeid 0x20
        ether 00:0c:29:4f:55:bd  txqueuelen 1000  (Ethernet)
        RX packets 1167  bytes 163088 (159.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1226  bytes 304624 (297.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens34: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.92.71  netmask 255.255.255.0  broadcast 192.168.92.255
        inet6 fe80::6c69:4429:bec2:ad41  prefixlen 64  scopeid 0x20
        ether 00:0c:29:4f:55:c7  txqueuelen 1000  (Ethernet)
        RX packets 694  bytes 89092 (87.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 571  bytes 40242 (39.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 22321  bytes 1178004 (1.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22321  bytes 1178004 (1.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap38ee592a-3c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet6 fe80::fc16:3eff:fe8f:a72b  prefixlen 64  scopeid 0x20
        ether fe:16:3e:8f:a7:2b  txqueuelen 1000  (Ethernet)
        RX packets 1151  bytes 241632 (235.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1018  bytes 99024 (96.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vxlan-1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 1e:04:03:ef:c3:86  txqueuelen 1000  (Ethernet)
        RX packets 1015  bytes 84488 (82.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1149  bytes 225462 (220.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

6.3 Storage Node

[root@cinder ~]# nmcli connection show
NAME   UUID                                  TYPE      DEVICE
ens35  1777ed92-ff58-7956-b8b3-ed928f82e0c8  ethernet  ens35
ens33  c96bc909-188e-ec64-3a96-6a90982b08ad  ethernet  ens33
ens34  0a1f319c-08d6-49d1-addb-c3a6b07d5ee2  ethernet  --
[root@cinder ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.90.72  netmask 255.255.255.0  broadcast 192.168.90.255
        inet6 fe80::c155:716:1592:c9c3  prefixlen 64  scopeid 0x20
        ether 00:0c:29:55:28:a5  txqueuelen 1000  (Ethernet)
        RX packets 931598  bytes 1342396273 (1.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 191367  bytes 222941211 (212.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens35: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.92.72  netmask 255.255.255.0  broadcast 192.168.92.255
        inet6 fe80::1a5c:9306:6e3f:89b2  prefixlen 64  scopeid 0x20
        ether 00:0c:29:55:28:b9  txqueuelen 1000  (Ethernet)
        RX packets 229  bytes 22092 (21.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 104  bytes 7562 (7.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 38024  bytes 1997730 (1.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 38024  bytes 1997730 (1.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 
