1. Environment
This walkthrough uses VMware Workstation to provision three CentOS 7.6 virtual machines.
Node roles: one controller node, one compute node, and one cinder block-storage node.
Hardware configuration:
Node       ----- CPU ----- Memory ----- OS         ----- Disk
controller ----- 4C  ----- 4GB    ----- CentOS 7.6 ----- 40GB
compute    ----- 4C  ----- 4GB    ----- CentOS 7.6 ----- 40GB
cinder     ----- 4C  ----- 4GB    ----- CentOS 7.6 ----- 40GB system disk, 20GB data disk
2. Network planning
The controller, compute, and storage nodes each have two NICs.
Note in particular: the first NIC on the compute and cinder nodes is used to reach the Internet and install the OpenStack dependency packages. If you have already set up a local OpenStack yum repository, this NIC does not need to be configured.
The management network is set to host-only mode. The official guide says the management network may be used to reach the Internet for package installation, but with an internal yum repository it needs no Internet access, so host-only mode is fine. In this setup my management network is shared with the local (external) network.
The tunnel network is set to host-only mode, because it never needs Internet access; it only carries the internal tenant network traffic of OpenStack.
The external network is set to NAT mode. On the controller node the external network mainly gives OpenStack tenant networks access to the outside world, and package installation also goes over this network; here the external network is shared with the management network.
Per-node network plan:
Notes on the three networks:
(1) Management network (management/API network):
This network provides system management functions: internal communication between the components on each node and access to the database service. Every node in the cluster must be attached to the management network. It also carries API traffic; the OpenStack components expose their API services to users over this network.
(2) Tunnel network (tunnel or self-service network):
This network carries the tenant virtual networks (VXLAN or GRE). The tunnel network uses point-to-point encapsulation; in OpenStack, these tunnels carry the data traffic of the virtual machine instances.
(3) External network (external or provider network):
An OpenStack deployment needs at least one external network. This network can reach networks outside the OpenStack installation, and devices outside the OpenStack environment can reach an IP on it. The external network also provides floating IPs for virtual machines in the OpenStack environment, so that instances can be accessed from outside.
Network architecture diagram (Swift object storage is not configured this time, and some networks are shared; adjust to your own environment)
Note: unless otherwise stated, commands must be run on all nodes.
3. Base environment setup
3.1 Stop and disable firewalld
[root@ ~]# systemctl stop firewalld && systemctl disable firewalld
3.2 Disable SELinux
[root@ ~]# setenforce 0
[root@ ~]# sed -i 's/enforcing/disabled/g' /etc/selinux/config
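Note that the `sed` above replaces every occurrence of `enforcing` in the file, including inside comment lines. Anchoring the pattern on the `SELINUX=` key is safer; a minimal sketch, demonstrated on a temporary copy rather than the real /etc/selinux/config:

```shell
# Anchored substitution: only the SELINUX= key line is rewritten,
# comments mentioning "enforcing" are left alone (demo on a temp copy)
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
```

On the real file the same pattern is `sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config`; the config change takes effect after a reboot, while `setenforce 0` covers the current boot.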
3.3 Disable NetworkManager
[root@ ~]# systemctl stop NetworkManager && systemctl disable NetworkManager
3.4 Edit /etc/hosts
[root@ ~]# vim /etc/hosts
192.168.1.83 controller
192.168.1.85 compute
192.168.1.84 cinder
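To confirm the entries took effect, a quick resolution check can be run on every node (a small sketch; `getent` resolves through NSS, which consults /etc/hosts):

```shell
# Report whether each cluster hostname resolves
for host in controller compute cinder; do
    if getent hosts "$host" >/dev/null 2>&1; then
        echo "OK: $host"
    else
        echo "WARN: $host not resolvable"
    fi
done
```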
3.5 Install the NTP service (chrony)
Controller node:
[root@controller ~]# yum -y install chrony
[root@controller ~]# vim /etc/chrony.conf
allow 192.168.1.0/24
[root@controller ~]# systemctl enable chronyd.service && systemctl restart chronyd.service
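For reference, the relevant controller-side chrony.conf lines end up roughly like this (the default pool `server` entries are kept as shipped; the `local stratum 10` fallback is an optional lab-only addition not present in the original steps):

```
server 0.centos.pool.ntp.org iburst   # default pool entries stay in place
server 1.centos.pool.ntp.org iburst
allow 192.168.1.0/24                  # serve time to the management network
local stratum 10                      # optional, lab only: keep serving if upstream is down
```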
Compute node: sync time against the host named controller:
[root@compute ~]# yum -y install chrony
[root@compute ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
[root@compute ~]# systemctl enable chronyd.service && systemctl restart chronyd.service
Cinder node: sync time against the host named controller:
[root@cinder ~]# yum -y install chrony
[root@cinder ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
[root@cinder ~]# systemctl enable chronyd.service && systemctl restart chronyd.service
Verification:
Controller node:
[root@controller ~]# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
^+ ntp1.ams1.nl.leaseweb.net 2 6 25 2 -29ms[ -29ms] +/- 192ms
^* a.chl.la 2 6 33 3 -321us[ +18ms] +/- 135ms
^- time.cloudflare.com 3 6 15 2 -4129us[-4129us] +/- 120ms
^- ntp6.flashdance.cx 2 6 35 2 -20ms[ -20ms] +/- 185ms
Compute node:
[root@compute ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* controller 3 6 17 5 -112us[-1230us] +/- 137ms
Storage node:
[root@cinder ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* controller 4 5 11 8 -119us[-1350us] +/- 140ms
3.6 Configure the OpenStack repositories
[root@ ~]# yum -y install epel-release
Switch to a Chinese EPEL mirror:
[root@ ~]# cat /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch/debug
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/SRPMS
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1
[root@ ~]# yum makecache fast
Install the Queens release repository:
[root@ ~]# yum -y install centos-release-openstack-queens
[root@ ~]# vim /etc/yum.repos.d/CentOS-OpenStack-queens.repo
Replace its contents with the Chinese mirror:
[centos-openstack-queens]
name=CentOS-7 - OpenStack queens
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/7/cloud/x86_64/openstack-queens/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
exclude=sip,PyQt4
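The repo file installed by `centos-release-openstack-queens` points at the upstream CentOS mirrors; the edit above swaps the baseurl for the TUNA mirror. The same edit can be scripted; the sketch below runs on a temporary copy, and the upstream baseurl shown is illustrative:

```shell
# Swap the upstream baseurl for the TUNA mirror (demo on a temp copy)
repo=$(mktemp)
cat > "$repo" <<'EOF'
[centos-openstack-queens]
name=CentOS-7 - OpenStack queens
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-queens/
gpgcheck=1
enabled=1
EOF
sed -i 's|^baseurl=.*|baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/7/cloud/x86_64/openstack-queens/|' "$repo"
grep '^baseurl=' "$repo"
```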
Install the Ceph repository and point it at the Chinese mirror:
[root@ ~]# vim /etc/yum.repos.d/CentOS-Ceph-Luminous.repo
[centos-ceph-luminous]
name=CentOS-$releasever - Ceph Luminous
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-luminous/el7/x86_64/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Rebuild the yum cache:
[root@ ~]# yum makecache fast
[root@ ~]# yum -y install openstack-selinux
[root@ ~]# yum -y install python-openstackclient
3.7 Install the SQL database
Controller node:
[root@controller ~]# yum -y install mariadb mariadb-server python2-PyMySQL
Edit the configuration file:
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.1.83
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 8192
collation-server = utf8_general_ci
character-set-server = utf8
[root@controller ~]# egrep -v "^#|^$" /etc/my.cnf.d/openstack.cnf
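One caveat on `max_connections = 8192`: MariaDB quietly lowers the effective value when the process open-files limit is too small, since every connection consumes file descriptors. A rough sanity check (the 2x factor is a heuristic assumption, not an exact MariaDB formula):

```shell
# Compare the open-files limit with the requested connection count;
# the 2x factor is a rough heuristic, not an exact MariaDB rule
want=8192
nofile=$(ulimit -n)
if [ "$nofile" = "unlimited" ] || [ "$nofile" -ge $((want * 2)) ]; then
    echo "nofile=$nofile looks sufficient for max_connections=$want"
else
    echo "nofile=$nofile may be too low for max_connections=$want"
fi
```

If the limit is too low, raise LimitNOFILE via a systemd drop-in for the mariadb service (under /etc/systemd/system/mariadb.service.d/) rather than only in limits.conf, since systemd services do not read limits.conf.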
Enable the service and start it:
[root@controller ~]# systemctl enable mariadb.service && systemctl start mariadb.service
Run the initial MySQL secure setup:
[root@controller ~]# mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none): (press Enter)
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password: openstack
Re-enter new password: openstack
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
3.8 Install the message queue
Controller node:
[root@controller ~]# yum -y install rabbitmq-server
Enable the service and start it:
[root@controller ~]# systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
Enable the RabbitMQ management plugin:
[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
Add an openstack user (the second argument is its password):
[root@controller ~]# rabbitmqctl add_user openstack openstack
Grant the openstack user configure, write, and read permissions:
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
[root@controller ~]# rabbitmqctl set_user_tags openstack administrator
[root@controller ~]# ss -tunl | grep 5672
Verify the service in a browser:
http://192.168.1.83:15672
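If a browser is not at hand, a plain TCP probe confirms both listeners; the sketch below uses bash's built-in /dev/tcp pseudo-device (the IP is this guide's controller address):

```shell
# Probe a TCP port with bash's /dev/tcp redirection; prints open/closed
check_port() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "open: $host:$port"
    else
        echo "closed: $host:$port"
    fi
}
check_port 192.168.1.83 5672    # AMQP listener
check_port 192.168.1.83 15672   # management UI
```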
3.9 Install memcached
Controller node:
[root@controller ~]# yum -y install memcached python-memcached
Edit the configuration file:
[root@controller ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,controller"
[root@controller ~]# egrep -v "^#|^$" /etc/sysconfig/memcached
Enable the service and start it:
[root@controller ~]# systemctl enable memcached.service && systemctl start memcached.service
3.10 Install the etcd service
Controller node:
[root@controller ~]# yum -y install etcd
Edit the configuration file:
[root@controller ~]# vim /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.83:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.83:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.83:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.83:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.1.83:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@controller ~]# egrep -v "^#|^$" /etc/etcd/etcd.conf
Enable the service and start it:
[root@controller ~]# systemctl enable etcd && systemctl start etcd
That completes the base environment; the next chapter covers installing Keystone.