How to Install Mirantis Fuel 5.1 OpenStack with Ceph
Author: @法不荣情 [Original link] http://weibo.com/p/2304189cacdb3d0102v55r
I am new to OpenStack. I started with RDO for a quick single-node deployment, then installed OpenStack manually by typing commands out of the installation guide, which was tedious and often confusing, let alone deploying a multi-node OpenStack HA environment. Fortunately, Mirantis OpenStack provides Fuel, a tool that can deploy a full OpenStack environment quickly. I previously used Fuel 5.0 on VMware Workstation 10 to deploy an OpenStack HA environment; it worked well and the HA setup came up quickly. Now that version 5.1 is out, after reading the relevant documentation I am deploying an OpenStack HA environment on real physical hardware, using Ceph as unified storage with two additional storage nodes.
Thanks to Luo Yong and others for their excellent documentation, and of course to Mirantis for its contribution. What follows are my notes from the deployment process; corrections are welcome.
1. About Mirantis
Mirantis is a leading OpenStack service integrator and the only company among the top five community contributors that makes its living purely on software and services (the others being Red Hat, HP, IBM, and Rackspace). Compared with the other community distributions, Fuel releases on a fast cadence, delivering a relatively stable community release roughly every two months.
2. About Fuel
Fuel is a tool designed for end-to-end "one-click" deployment of OpenStack. It covers automated PXE-based operating system installation, DHCP, orchestration, and Puppet-based configuration management, plus very useful extras such as health checks on key OpenStack services and real-time log viewing.
Fuel 5.1 is based on the Icehouse release of OpenStack, with CentOS 6.5 and Ubuntu 12.04.4 as the supported operating systems.
Fuel's advantages include:
· Automatic node discovery and pre-deployment validation
· Simple, fast configuration
· Support for multiple operating systems and distributions, and for HA deployments
· An external API for managing and configuring the environment, e.g. dynamically adding compute/storage nodes
· Built-in health-check tools
· Neutron support, including GRE and namespaces, with per-subnet mapping to specific physical NICs
Fuel architecture
Image source: http://www.openstack.cn/p692.html
For deploying OpenStack with Fuel on virtual machines, see this very detailed, well-written guide:
http://www.openstack.cn/p692.html
3. Environment topology
Because this is a test environment, NICs were limited: each server has only two, so only two switches are used, both DELL PowerConnect 5448 units.
4. Switch configuration
Create the required VLANs (here, VLANs 101 and 102) and enable flow control on the switch ports. All the networks carried on the switches (Private, Management, Storage) must allow the required VLANs: configure the ports in use as trunk ports and permit those VLANs. The configuration is as follows (it may differ on other switch models):
switch > enable
switch # configure
switch (config) # vlan database
switch (config-vlan) # vlan 101-102
switch (config-vlan) # exit
switch (config) # interface range ethernet all
switch (config-if) # switchport mode trunk
switch (config-if) # switchport trunk allowed vlan add all
If the switches are not configured this way, Fuel's network verification will fail, because VLAN tagging is in use.
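Before running Fuel's network verification, the VLAN and trunk state can be spot-checked on the switch. This is only a sketch: the exact show commands vary between PowerConnect firmware revisions, and the port name (g1) is a placeholder.

```shell
# Verify that VLANs 101-102 exist and that ports are trunking.
# Command names may differ slightly across PowerConnect firmware versions.
switch# show vlan                                # VLANs 101 and 102 should be listed
switch# show interfaces switchport ethernet g1   # mode should be Trunk, VLANs 101-102 allowed
```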
5. Installing the Fuel master
This is just an operating system installation plus a little configuration. At the installation welcome screen shown below, press the Tab key as prompted to edit the IP settings, or change showmenu=no to showmenu=yes and press Enter to open the detailed configuration menu. The defaults are used here, so simply pressing Enter completes the installation in one step.
The screen shown after installation completes is below.
It displays the root login password and the Fuel web UI address, username, and password. The web login page looks like this:
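Once the master is up, you can log in over SSH with the credentials shown on that screen and confirm the Fuel services are operational from the command line. A minimal sketch, assuming Fuel's default admin network address 10.20.0.2 (adjust to your setup):

```shell
# Log in to the Fuel master over the admin (PXE) network
ssh root@10.20.0.2

# List the available releases; for Fuel 5.1 this should show
# Icehouse on CentOS 6.5 and Icehouse on Ubuntu 12.04.4
fuel release
```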
6. Deployment process
6.1 Creating a new OpenStack environment
After logging in with username admin and password admin, you will see the screen below.
Click "New OpenStack Environment" to start creating the environment, then click "Next".
Enter the environment name and select the OpenStack release. This is really a choice of operating system, since the release is fixed at Icehouse. Click "Next".
Select the deployment mode. There are two: "Multi-node with HA" and "Multi-node". HA requires at least three controller nodes; select "Multi-node with HA" and click "Next".
Because this environment is deployed on physical machines, select KVM. On virtual machines select QEMU; for a vCenter environment select vCenter. Click "Next".
Select the GRE network mode here and click "Next".
For the storage backend select "Ceph". Note that this option requires two or more additional nodes as storage nodes. Click "Next".
No additional services are selected here; click "Next".
Click "Create" to finish creating the OpenStack environment.
6.2 Node discovery
This test environment uses two NICs per server, though three are preferable, and the NICs must support PXE. Enable the server's virtualization feature in the BIOS and set it to boot from the PXE network.
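If the servers have IPMI, the one-time PXE boot can also be triggered remotely instead of through the BIOS menu. A hedged sketch; the BMC address and credentials below are placeholders:

```shell
# Set the next boot device to PXE, then power-cycle the server via IPMI.
# Replace the address and credentials with your BMC's actual values.
ipmitool -I lanplus -H 192.168.10.11 -U admin -P admin chassis bootdev pxe
ipmitool -I lanplus -H 192.168.10.11 -U admin -P admin chassis power cycle
```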
After PXE boot, the node enters the bootstrap image by default. Once the bootstrap login prompt appears on screen, the Fuel web UI can discover the node.
When the Fuel web UI discovers a node, it shows a prompt like the following.
After discovery, add the nodes: open the newly created environment, click "Add Nodes" in the top right, tick the "Controller" role, and then select the servers for that role. It is best to record each server's NIC MAC address beforehand, because there is otherwise no way to tell which entry corresponds to which physical server. Alternatively, PXE-boot only the servers intended as controllers (at least three), add them as nodes, and only then boot and select the compute and storage servers.
After the nodes are added they appear as below, though with status "Pending Addition"; the screenshot shows a finished deployment.
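Discovered nodes, including their MAC addresses, can also be listed from the Fuel master's CLI, which helps match UI entries to physical servers. A sketch, assuming the fuel CLI on the master and that the environment created above has ID 1:

```shell
# On the Fuel master: list discovered nodes with their MAC addresses.
# The "mac" column can be matched against the addresses recorded earlier.
fuel node list

# Roles can also be assigned from the CLI instead of the UI
# (node IDs come from the listing above; --env 1 is an assumption)
fuel --env 1 node set --node 1,2,3 --role controller
```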
6.3 Deployment and configuration
Tick a server to configure its disks and network.
The disk configuration, shown below, uses the defaults here.
The network configuration is changed as shown below.
Next comes the environment-wide network configuration: click "Networks" and configure as shown.
Finally, verify the network. If the switches were not configured correctly, an error is reported here; forcing the deployment anyway may cause errors during deployment.
Click "Settings" to configure the OpenStack and storage settings, leaving everything else at its defaults.
Storage uses Ceph.
Once everything is set, click "Deploy Changes" to start the deployment.
When the deployment completes, the web login information is displayed as below.
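Since Ceph is the storage backend here, it is worth confirming the cluster's health once the deployment finishes. A minimal sketch, run from any controller node:

```shell
# Check overall Ceph cluster health; it should report HEALTH_OK
# once all OSDs on the two storage nodes are up
ceph -s

# Confirm that the OSDs on both storage nodes are up and in
ceph osd tree

# Verify that the pools used by OpenStack (e.g. volumes, images) were created
ceph osd lspools
```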
References
1. http://community.mellanox.com/docs/DOC-1474
This post shows how to set up and configure Mirantis Fuel ver. 5.1/5.1.1 (OpenStack Icehouse based on CentOS 6.5) to support Mellanox ConnectX-3 adapters to work in SR-IOV mode for the VMs on the compute nodes, and in iSER (iSCSI over RDMA) transport mode for the storage nodes.
Related references:
- MLNX-OS User Manual - (located at support.mellanox.com )
- Planning Guide — Mirantis OpenStack v5.1 | Documentation
- Reference Architectures — Mirantis OpenStack v5.1 | Documentation
- HowTo Configure 56GbE Link on Mellanox Adapters and Switches
- HowTo upgrade MLNX-OS Software on Mellanox switches
- Mirantis Fuel ISO Download page
- HowTo Configure iSER Block Storage for OpenStack Cloud with Mellanox ConnectX-3 Adapters
- Mellanox CloudX, Mirantis Fuel 5.1 Solution Guide
- Movie - HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters
Before reading this post, make sure you are familiar with Mirantis Fuel 5.1/5.1.1 installation procedures.
It is also recommended to watch the movie HowTo Install Mirantis Fuel 5.1 OpenStack with Mellanox Adapters
Setup Diagram:
Note: Besides the Fuel Master node, all nodes should be connected to all five networks.
Note: Server’s IPMI and the switches management interfaces wiring and configuration are out of scope.
You need to ensure that there is management access (SSH) to Mellanox Ethernet switch SX1036 to perform the configuration.
Setup BOM:

| Component | Qty | Details |
|---|---|---|
| Fuel Master server | 1 | DELL PowerEdge R620: CPU 2 x E5-2650 @ 2.00GHz, MEM 128 GB, HD 2 x 900GB SAS 10k in RAID-1 |
| Cloud Controller and Compute servers (3 x Controllers, 3 x Computes) | 6 | DELL PowerEdge R620: CPU 2 x E5-2650 @ 2.00GHz, MEM 128 GB, HD 2 x 900GB SAS 10k in RAID-1, NIC Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) |
| Cloud Storage server | 1 | Supermicro X9DR3-F: CPU 2 x E5-2650 @ 2.00GHz, MEM 128 GB, HD 24 x 6Gb/s SATA Intel SSD DC S3500 Series 480GB (SSDSC2BB480G4), RAID Ctrl LSI Logic MegaRAID SAS 2208 with battery, NIC Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) |
| Admin (PXE) and Public switch | 1 | 1Gb switch with VLANs configured to support both networks |
| Cloud Ethernet switch | 1 | Mellanox SX1036 40/56Gb 36-port Ethernet |
| Cables | | 16 x 1Gb CAT-6e for Admin (PXE) and Public networks; 7 x 56GbE copper cables up to 2m (MC2207130-XXX) |
Note: You can use Mellanox ConnectX-3 Pro EN (MCX313A-BCCT) or Mellanox ConnectX-3 Pro VPI (MCX353-FCCT) adapter cards.
Storage server RAID Setup:
- 2 SSD drives in bays 0-1 configured in RAID-1 (Mirror): The OS will be installed on it.
- 22 SSD drives in bays 3-24 configured in RAID-10: The Cinder volume will be configured on the RAID drive.
Network Physical Setup:
- Connect all nodes to the Admin (PXE) 1GbE switch (preferably through the eth0 interface on board).
It is recommended to write down the MAC addresses of the Controller and Storage servers to make Cloud installation easier (see the Controller Node section below in the Nodes tab).
- Connect all nodes to the Public 1GbE switch (preferably through the eth1 interface on board).
- Connect port #1 (eth2) of ConnectX-3 Pro to SX1036 Ethernet switch (Private, Management, Storage networks).
Note: The interface names (eth0, eth1, p2p1, etc.) may vary between servers from different vendors.
Note: Port bonding is not supported when using SR-IOV over the ConnectX-3 adapter family.
Rack Setup Example:
Fuel Node:
Compute and Controller Nodes:
4. Configure the required VLANs and enable flow control on the Ethernet switch ports.
All related VLANs should be enabled on the 40/56GbE switch (Private, Management, Storage networks). On Mellanox switches, use the command flow below to enable VLANs (e.g. VLAN 1-100 on all ports).
Note: Refer to the MLNX-OS User Manual (located at support.mellanox.com) to get familiar with the switch software.
Note: Before you start using the Mellanox switch, it is recommended to upgrade it to the latest MLNX-OS version.
switch > enable
switch # configure terminal
switch (config) # vlan 1-100
switch (config vlan 1-100) # exit
switch (config) # interface ethernet 1/1 switchport mode hybrid
switch (config) # interface ethernet 1/1 switchport hybrid allowed-vlan all
switch (config) # interface ethernet 1/2 switchport mode hybrid
switch (config) # interface ethernet 1/2 switchport hybrid allowed-vlan all
...
switch (config) # interface ethernet 1/36 switchport mode hybrid
switch (config) # interface ethernet 1/36 switchport hybrid allowed-vlan all
Flow control is required when running iSER (RDMA over RoCE - Ethernet). On Mellanox switches, run the following command to enable flow control on the switches (on all ports in this example):
switch (config) # interface ethernet 1/1-1/36 flowcontrol receive on force
switch (config) # interface ethernet 1/1-1/36 flowcontrol send on force
To save the configuration (permanently), run:
switch (config) # configuration write
Note: Flow control (global pause) is normally enabled by default on the servers. If it is disabled, run (replacing eth2 with the ConnectX interface name):
# ethtool -A eth2 rx on tx on
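Whether pause frames are actually active can be checked per interface. A quick sketch (eth2 stands in for the ConnectX-3 port name, which may differ on your servers):

```shell
# Show the current flow-control (pause) settings of the 40/56GbE interface;
# both RX and TX should report "on" for iSER to work reliably
ethtool -a eth2
```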
6. If you are running 56GbE, follow this example to set the link between the servers and the switch from 40GbE to 56GbE: HowTo Configure 56GbE Link on Mellanox Adapters and Switches.
Networks Allocation (Example)
The example in this post is based on the network allocation defined in this table:

| Network | Subnet | Gateway | Notes |
|---|---|---|---|
| Admin (PXE) | 10.20.0.0/24 | N/A | Used to provision and manage Cloud nodes via the Fuel Master. Enclosed within a 1Gb switch with no routing outside. This is the default Fuel network. |
| Management | 192.168.0.0/24 | N/A | The Cloud Management network. Uses VLAN 2 in the SX1036 over the 40/56Gb interconnect. This is the default Fuel network. |
| Storage | 192.168.1.0/24 | N/A | Provides storage services. Uses VLAN 3 in the SX1036 over the 40/56Gb interconnect. This is the default Fuel network. |
| Public and Neutron L3 | 10.7.208.0/24 | 10.7.208.1 | The Public network connects Cloud nodes to an external network; Neutron L3 provides Floating IPs for tenant VMs. Both are IP ranges within the same subnet, with routing to external networks. |

All Cloud nodes that have a Public IP, plus the HA virtual IP, need an address, so this build with 7 Cloud nodes needs 8 IPs in the Public range. Consider a larger range if you plan to add more servers to the cloud later. In our build we use the range 10.7.208.53 >> 10.7.208.76 for both Public and Neutron L3, allocated as follows:
- Fuel Master IP: 10.7.208.53
- Public range: 10.7.208.54 >> 10.7.208.61 (used for physical servers)
- Neutron L3 range: 10.7.208.62 >> 10.7.208.76 (used for the Floating IP pool)
Install the Fuel Master via ISO Image:
- Boot Fuel Master Server from the ISO as a virtual CD (click here for the image).
- Press the Tab key on the very first installation screen (the one that says "Welcome to Fuel Installer"), update the kernel option from showmenu=no to showmenu=yes, and hit Enter. Fuel will now be installed and the server will reboot.
- After the reboot, boot from the local disk. The Fuel menu window will start.
- Network setup:
- Configure eth0 - PXE (Admin) network interface.
Ensure the default Gateway entry is empty for the interface – the network is enclosed within the switch and has no routing outside.
Select Apply.
- Configure eth1 – Public network interface.
The interface is routable to LAN/internet and will be used to access the server.
Configure static IP address, netmask and default gateway on the public network interface.
Select Apply.
- PXE Setup
The PXE network is enclosed within the switch.
Do not make changes – proceed with defaults.
Press Check button to ensure no errors are found.
- Time Sync
Check NTP availability (e.g. ntp.org) via Time Sync tab on the left.
Configure NTP server entries suitable for your infrastructure.
Press Check to verify settings.
- Navigate to Quit Setup and select Save and Quit to proceed with the installation.
- Once the Fuel installation is done, you are provided with Fuel access details both for SSH and HTTP.
Access the Fuel Web UI at http://10.7.208.53:8000. Use "admin" for both login and password.
OpenStack Environment:
Log into Fuel
- Open the Fuel URL in a web browser (for example: http://10.7.208.53:8000)
- Log into Fuel using "admin" for both login and password.
Creating a new OpenStack environment:
- Open a new environment in the Fuel dashboard. A configuration wizard will start.
- Configure the new environment wizard as follows:
- Name and Release
- Name: TEST
- Release: Icehouse on CentOS 6.5 (2014.1.1-5.1)
- Deployment Mode
- Compute
- Network
- Neutron with VLAN segmentation
- Storage Backend
- Cinder: Default
- Glance: Default
- Additional Services
- Finish
- When done, a new TEST environment will be created. Click on it and proceed with environment configuration.
Configuring the OpenStack Environment:
Settings Tab
Kernel parameters
If you wish to enable iSER or SR-IOV, add intel_iommu=on to the list of kernel parameters.
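After the nodes are deployed, you can confirm the parameter is active on a compute node. A minimal sketch:

```shell
# Check that the kernel was booted with intel_iommu=on
grep -o 'intel_iommu=on' /proc/cmdline

# Confirm the IOMMU was actually initialized (messages vary by kernel version)
dmesg | grep -i -e DMAR -e IOMMU | head
```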
Mellanox Neutron component
To work with SR-IOV mode, select Install Mellanox drivers and Mellanox SR-IOV plugin.
Note: The default number of supported virtual functions (VFs) is 16. If you want to have more vNICs available, please contact Mellanox Support.
Configure Storage
- To use high-performance block storage, check iSER protocol for volumes (Cinder) in the Storage section.
Note: "Cinder LVM over iSCSI for volumes" should remain checked (default).
- Save the settings.
Public Network Assignment
- Make sure Assign public network to all nodes is checked
Nodes Tab
Servers Discovery by Fuel
This section will assign Cloud roles to servers.
First of all, servers should be discovered by Fuel. For this to happen, make sure the servers are configured for PXE boot over Admin (PXE) network.
When done, reboot the servers and wait for them to be discovered.
Discovered nodes will be counted in top right corner of the Fuel dashboard.
Now you may add UNALLOCATED NODES to the setup.
First you may add Controller, Storage, and then Compute nodes.
Add Controller Nodes
- Click Add Node.
- Identify the 3 controller nodes. Use the last 4 hex digits of the MAC address of the interface connected to the Admin (PXE) network.
Assign each of these nodes the Controller role.
- Click Apply Changes.
Add Storage Node
- Click Add Node.
- Identify your storage node. Use the last 4 hex digits of the MAC address of the interface connected to the Admin (PXE) network.
In our example this is the only Supermicro server, so identification is easy.
Select this node to be a Storage - Cinder LVM node.
- Click Apply Changes.
Add Compute Nodes
- Click Add Node.
- Select all the nodes that are left and assign them the Compute role.
- Click Apply Changes.
Configure Interfaces
In this step, we will map each network to a physical interface for each node.
You can choose and configure multiple nodes in parallel.
Fuel will not let you proceed with bulk configuration if HW differences between the selected nodes (such as the number of network ports) are detected.
In this case the Configure Interfaces button will show an error icon (see below).
The example below allows configuring 6 nodes in parallel. The 7th node (Supermicro storage node) will be configured separately.
- In this example, we set the Admin (PXE) network to eth0 and the Public network to eth1.
- The Storage, Private and Management networks should run on the ConnectX-3 adapter's 40/56GbE port.
This is an example:
- Click Back To Node List and perform the network configuration for the Storage node.
Note:
Port bonding is not supported when using SR-IOV over ConnectX-3 Pro adapter family.
Configure Disks
There is no need to change the defaults for Controller and Compute nodes unless you are sure changes are required.
For the Storage node it is recommended to allocate only high performing RAID as Cinder storage. The small disk shall be allocated to Base System.
- Select Storage node
- Press Configure Disks button
- Click on sda disk bar, set Cinder allowed space to 0 MB and make Base System occupy the entire drive – press USE ALL ALLOWED SPACE.
- Press Apply.
Networks Tab
Public
Note: In our example, the Public network does not use a VLAN. If you use a VLAN for the Public network, check Use VLAN tagging and set the proper VLAN ID.
Management
In this example, we select VLAN 2 for the management network. The CIDR is left untouched.
Storage
In this example, we select VLAN 3 for the storage network. The CIDR is left untouched.
Neutron L2 Configuration
In this example, we set the VLAN range to 4-100. It should be aligned with the switch VLAN configuration (above).
The base MAC is left untouched.
Neutron L3 Configuration:
Floating IP range: Configure it to be part of your Public network range; in this example, we select 10.7.208.62-10.7.208.76.
Internal Network: Leave CIDR and Gateway unchanged.
Name servers: Leave the DNS servers unchanged.
Save Configuration
Click Save Settings at the bottom of the page.
Verify Networks
Click Verify Networks.
You should see the following message: Verification succeeded. Your network is configured correctly. Otherwise, check the log file for troubleshooting.
Deployment
Click the Deploy Changes button and follow the installation progress on the Nodes tab and in the logs.
Health Test
- Click the Health Test tab.
- Check the Select All checkbox.
- Uncheck Platform services functional tests (image with special packages is required).
- Click Run Tests.
All tests should pass. Otherwise, check the log file for troubleshooting.
You can now safely use the cloud.
Click the dashboard link at the top of the page.
Usernames and Passwords:
- Fuel server Dashboard user / password: admin / admin
- Fuel server SSH user / password: root / r00tme
- TestVM SSH user / password: cirros / cubswin:)
- To get controller node CLI permissions run: # source /root/openrc
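For example, a quick post-deployment check from a controller node, using the Icehouse-era clients that Fuel installs:

```shell
# Load admin credentials on a controller node
source /root/openrc

# Verify that all nova services are up across the HA controllers
nova service-list

# List the KVM hypervisors and any running instances
nova hypervisor-list
nova list
```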
Prepare Linux VM Image for CloudX:
In order to have network and RoCE support on the VM, MLNX_OFED (2.2-1 or later) should be installed on the VM environment.
MLNX_OFED may be downloaded from http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
(For CentOS/RHEL images, you can use virt-manager to open an existing VM image and install MLNX_OFED.)
Known Issues:

| Issue # | Description | Workaround | Bug in Launchpad |
|---|---|---|---|
| 1 | The default number of supported virtual functions (VFs), 16, is not sufficient. | To have more vNICs available, contact Mellanox Support. | |
| 2 | Hypervisor crash on instance (VM) termination. | Contact Mellanox Support. | |
| 3 | 56Gb links are discovered by Fuel as 10Gb. | No action is required; the actual port speed is 56Gb. After deployment, ports are re-discovered as 56Gb. | |