I succeeded in building a three-node OpenStack cluster with RHOS (Red Hat OpenStack) Folsom Preview. Here are some notes on how I set it up.
Please see the following announcement for how you can get the Preview subscription.
Other related documents.
Here is the physical network connection. In production, you must separate the Public NW and the Management NW.
[Public/Management NW] 10.0.1.0/24
 |
 |--------------------------------
 |                               |
 |em1                            |
------                          |
|opst01|                         |
------                          |
 |em2                            |
 |                               |
 | [Private NW]                  |
 |                               |
 |---------------                |
 |em2     |em2                   |
------   ------                 |
|opst02| |opst03|                |
------   ------                 |
 |em1     |em1                   |
 |        |                      |
 ---------------------------------
The following components are placed on each server (opst01-opst03).
opst01 works as the network gateway node. Quantum's LinuxBridge plugin is used to create the following logical network.
[public01] <-- Public (External) NW: 10.0.1.0/24
    |
    | <---- SNAT/Floating IP translation is done here.
 --------
|router01|
 --------
    |
 [net01] <-- Private NW: 192.168.101.0/24
    |
    |----------...
    |         |
 [vm01]    [vm02] ...
Due to a limitation of the RHOS Folsom Preview (or, more precisely, of RHEL 6.3), network namespaces are disabled. Hence, only one virtual router can be used, and overlapping IP addresses are not supported.
Note that opst01 needs to have at least one free disk partition for Cinder's volume group. I use /dev/sda3 for that here.
I suppose that you've already registered your RHEL6.3 servers to RHOS Folsom Preview subscription, and enabled the Folsom channel as follows.
# yum-config-manager --enable rhel-server-ost-6-folsom-rpms
# yum-config-manager --disable rhel-server-ost-6-preview-rpms
Here are the common /etc/hosts entries.
/etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.251  opst01
10.0.1.252  opst02
10.0.1.253  opst03
These IPs are assigned to em1 of each server. em2 is just brought up without an IP address (set BOOTPROTO="none" in ifcfg-em2). Note that the "emX" interfaces may be named "ethX" on your servers.
SELinux is set to "Permissive" since the SELinux policy for OpenStack is still under development, and iptables is disabled for the sake of simplicity.
# chkconfig iptables off
# service iptables stop
Don't forget to use ntpd for time sync.
As a workaround for this bug, take the following steps.
1. Check the following entries in /etc/sysctl.conf.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
These should be 0 on the network node (opst01). On the other hand, change them to 1 on the compute nodes (opst02, opst03) if you use Nova compute's security group functionality.
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
2. Create /etc/sysconfig/modules/openstack-quantum-linuxbridge.modules as below.
#!/bin/sh
modprobe -b bridge >/dev/null 2>&1
exit 0
3. Set the execute permission.
# chmod ugo+x /etc/sysconfig/modules/openstack-quantum-linuxbridge.modules
4. Reboot the server to apply it.
# reboot
And finally, install the common prerequisite packages.
# yum install openstack-utils dnsmasq-utils
Modify the following entry in /etc/sysctl.conf.
net.ipv4.ip_forward = 1
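After editing /etc/sysctl.conf, the setting takes effect after `sysctl -p` or a reboot. A quick way to confirm the live value is to read it back from /proc:

```shell
# Read the current value of the ip_forward sysctl from /proc.
# On the network node this should report 1 after the change is applied.
val=$(cat /proc/sys/net/ipv4/ip_forward)
echo "ip_forward is currently: $val"
```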
Install packages and create database tables.
# yum install openstack-keystone
# openstack-db --init --service keystone
mysql-server is not installed.  Would you like to install it now? (y/n): y
Loaded plugins: product-id, security, subscription-manager
Updating certificate-based repositories.
...
Complete!
mysqld is not running.  Would you like to start it now? (y/n): y
Initializing MySQL database:  Installing MySQL system tables...
OK
...
Since this is a fresh installation of MySQL, please set a password for the 'root' mysql user.
Enter new password for 'root' mysql user: pas4mysql
Enter new password again: pas4mysql
Verified connectivity to MySQL.
Creating 'keystone' database.
Asking openstack-keystone to sync the database.
Complete!
Note that the MySQL root password "pas4mysql" is up to you. The same goes for the other passwords in the rest of this guide.
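If you prefer random passwords over hand-picked ones, one option is the same openssl technique this guide uses for the service token; `rand -hex 10` prints 20 hexadecimal characters:

```shell
# Generate a random 20-character hex password (10 random bytes, hex-encoded).
pass=$(openssl rand -hex 10)
echo "$pass"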
# export SERVICE_TOKEN=$(openssl rand -hex 10)
# export SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0
# mkdir /root/work
# echo $SERVICE_TOKEN > /root/work/ks_admin_token
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $SERVICE_TOKEN
# chkconfig openstack-keystone on
# service openstack-keystone start
Create the admin user, role, and tenant. All of them are used for admin operations.
# keystone user-create --name admin --pass pas4admin
# keystone role-create --name admin
# keystone tenant-create --name admin
# user=$(keystone user-list | awk '/admin/ {print $2}')
# role=$(keystone role-list | awk '/admin/ {print $2}')
# tenant=$(keystone tenant-list | awk '/admin/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
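The awk pipelines above rely on the clients printing ASCII tables with the ID in the second column. A minimal sketch of how that extraction works, using a made-up sample table in the Folsom-era client's format (the id value is invented for illustration):

```shell
# Sample 'keystone user-list' output; the table layout is an assumption
# based on the Folsom-era client, and the id is made up.
sample='+----------------------------------+---------+
| id                               | name    |
+----------------------------------+---------+
| 3f3a1b2c4d5e6f7a8b9c0d1e2f3a4b5c | admin   |
+----------------------------------+---------+'
# '/admin/ {print $2}' selects the row containing "admin" and prints the
# second whitespace-separated field, which is the id column.
user=$(echo "$sample" | awk '/admin/ {print $2}')
echo "$user"
```

Note that the pattern matches any row containing the string, so it works best when the name is unique in the listing.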
Create /root/work/keystonerc_admin as below.
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=pas4admin
export OS_AUTH_URL=http://127.0.0.1:35357/v2.0/
export PS1="[\u@\h \W(keystone_admin)]\$ "
And source it.
# unset SERVICE_ENDPOINT
# unset SERVICE_TOKEN
# . keystonerc_admin
Create the service entry for Keystone itself.
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
# service=$(keystone service-list | awk '/keystone/ {print $2}')
# keystone endpoint-create --region RegionOne \
    --service_id $service \
    --publicurl 'http://opst01:5000/v2.0' \
    --adminurl 'http://127.0.0.1:35357/v2.0' \
    --internalurl 'http://127.0.0.1:5000/v2.0'
Add a demonstration tenant "redhat" and its user "enakai".
# keystone user-create --name enakai --pass xxxxxxxx
# keystone role-create --name user
# keystone tenant-create --name redhat
# user=$(keystone user-list | awk '/enakai/ {print $2}')
# role=$(keystone role-list | awk '/user/ {print $2}')
# tenant=$(keystone tenant-list | awk '/redhat/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
Create /root/work/keystonerc_enakai as below.
export OS_USERNAME=enakai
export OS_TENANT_NAME=redhat
export OS_PASSWORD=xxxxxxxx
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
export PS1="[\u@\h \W(keystone_enakai)]\$ "
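In case it isn't obvious what sourcing these rc files does: the exported variables land in the current shell's environment, which is where the keystone/glance/nova clients read their credentials from. A tiny standalone demo (the file name and values here are just for illustration):

```shell
# Write a minimal rc file and source it; the variables become visible
# in the current shell, just like keystonerc_admin/keystonerc_enakai.
cat > /tmp/keystonerc_demo <<'EOF'
export OS_USERNAME=demo
export OS_TENANT_NAME=demotenant
EOF
. /tmp/keystonerc_demo
echo "$OS_USERNAME/$OS_TENANT_NAME"
```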
Install packages and create database tables.
# yum install openstack-glance
# openstack-db --init --service glance
Please enter the password for the 'root' MySQL user: pas4mysql
...
Modify the config files and start the services.
# openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-api-paste.ini filter:authtoken admin_token $(cat /root/work/ks_admin_token)
# openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
# openstack-config --set /etc/glance/glance-registry-paste.ini filter:authtoken admin_token $(cat /root/work/ks_admin_token)
# chkconfig openstack-glance-registry on
# chkconfig openstack-glance-api on
# service openstack-glance-registry start
# service openstack-glance-api start
Create service entry in Keystone.
# . keystonerc_admin
# keystone service-create --name=glance --type=image --description="Glance Image Service"
# service=$(keystone service-list | awk '/glance/ {print $2}')
# keystone endpoint-create --service_id $service \
    --publicurl http://opst01:9292/v1 \
    --adminurl http://127.0.0.1:9292/v1 \
    --internalurl http://127.0.0.1:9292/v1
Import the sample machine image of Fedora 17.
# glance add name=f17-jeos is_public=true disk_format=qcow2 container_format=ovf copy_from=http://berrange.fedorapeople.org/images/2012-11-15/f17-x86_64-openstack-sda.qcow2
# glance show $(glance index | awk '/f17-jeos/ {print $1}')
URI: http://opst01:9292/v1/images/34e0eaef-4672-40ff-82fd-a85350868368
Id: 34e0eaef-4672-40ff-82fd-a85350868368
Public: Yes
Protected: No
Name: f17-jeos
Status: saving
Size: 251985920
Disk format: qcow2
Container format: ovf
Minimum Ram Required (MB): 0
Minimum Disk Required (GB): 0
Owner: 5e308a4f4a73488d9facbc3fb23c7d38
Created at: 2012-11-17T14:31:51
Updated at: 2012-11-17T14:31:52
Install packages and create the database tables.
# yum install openstack-cinder
# openstack-db --init --service cinder
Please enter the password for the 'root' MySQL user: pas4mysql
...
Use /dev/sda3 for Cinder's volume group. Choose an appropriate partition in your environment.
# openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/cinder/api-paste.ini filter:authtoken admin_token $(cat /root/work/ks_admin_token)
# pvcreate /dev/sda3
# vgcreate cinder-volumes /dev/sda3
# grep -q /etc/cinder/volumes /etc/tgt/targets.conf || sed -i '1iinclude /etc/cinder/volumes/*' /etc/tgt/targets.conf
# chkconfig tgtd on
# service tgtd start
# chkconfig openstack-cinder-api on
# chkconfig openstack-cinder-scheduler on
# chkconfig openstack-cinder-volume on
# service openstack-cinder-api start
# service openstack-cinder-scheduler start
# service openstack-cinder-volume start
Depending on the network configuration, you may need to set Cinder server's IP address.
# openstack-config --set /etc/cinder/cinder.conf DEFAULT iscsi_ip_address 10.0.1.251
Create service entry in Keystone.
# . keystonerc_admin
# keystone service-create --name=cinder --type=volume --description="Cinder Volume Service"
# service=$(keystone service-list | awk '/cinder/ {print $2}')
# keystone endpoint-create --service_id $service \
    --publicurl "http://opst01:8776/v1/\$(tenant_id)s" \
    --adminurl "http://127.0.0.1:8776/v1/\$(tenant_id)s" \
    --internalurl "http://127.0.0.1:8776/v1/\$(tenant_id)s"
Install packages and modify the basic config entries.
# yum install openstack-nova
# openstack-db --init --service nova
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_token $(cat /root/work/ks_admin_token)
# openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface em2
# openstack-config --set /etc/nova/nova.conf DEFAULT public_interface em1
Set up the QPID server.
# yum install qpid-cpp-server
# sed -i -e 's/auth=.*/auth=no/g' /etc/qpidd.conf
# chkconfig qpidd on
# service qpidd start
Apply additional configs and start the services.
# openstack-config --set /etc/nova/nova.conf DEFAULT volume_api_class nova.volume.cinder.API
# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
# chkconfig openstack-nova-api on
# chkconfig openstack-nova-cert on
# chkconfig openstack-nova-objectstore on
# chkconfig openstack-nova-scheduler on
# service openstack-nova-api start
# service openstack-nova-cert start
# service openstack-nova-objectstore start
# service openstack-nova-scheduler start
Create the service entry in Keystone.
# . keystonerc_admin
# keystone service-create --name=nova --type=compute --description="Nova Compute Service"
# service=$(keystone service-list | awk '/nova/ {print $2}')
# keystone endpoint-create --service_id $service \
    --publicurl "http://opst01:8774/v1.1/\$(tenant_id)s" \
    --adminurl "http://127.0.0.1:8774/v1.1/\$(tenant_id)s" \
    --internalurl "http://127.0.0.1:8774/v1.1/\$(tenant_id)s"
Don't forget to source the appropriate keystonerc file before running quantum-*-setup on the network node.
# . keystonerc_admin
Install packages and create database tables.
# yum install openstack-quantum openstack-quantum-linuxbridge gedit
# quantum-server-setup --plugin linuxbridge
Quantum plugin: linuxbridge
Plugin: linuxbridge => Database: quantum_linux_bridge
Please enter the password for the 'root' MySQL user: pas4mysql
Verified connectivity to MySQL.
Please enter network device for VLAN trunking: em2
Would you like to update the nova configuration files? (y/n): y
Configuration updates complete!
Modify the following lines of /usr/lib/python2.6/site-packages/quantum/agent/linux/iptables_manager.py as a workaround for this bug.
272         # s = [('/sbin/iptables', self.ipv4)]
273         s = [('iptables', self.ipv4)]
Start Quantum's main service.
# chkconfig quantum-server on
# service quantum-server start
Set up and start the L2 agent (LinuxBridge plugin).
# quantum-node-setup --plugin linuxbridge
Quantum plugin: linuxbridge
Please enter the Quantum hostname: opst01
Would you like to update the nova configuration files? (y/n): y
# openstack-config --set /etc/quantum/plugin.ini VLANS tenant_network_type vlan
# openstack-config --set /etc/quantum/plugin.ini VLANS network_vlan_ranges physnet1,physnet2:100:199
# openstack-config --set /etc/quantum/plugin.ini LINUX_BRIDGE physical_interface em1,em2
# openstack-config --set /etc/quantum/plugin.ini LINUX_BRIDGE physical_interface_mappings physnet1:em1,physnet2:em2
# chkconfig quantum-linuxbridge-agent on
# service quantum-linuxbridge-agent start
Set up and start the DHCP agent.
# quantum-dhcp-setup --plugin linuxbridge
Quantum plugin: linuxbridge
Please enter the Quantum hostname: opst01
Configuration updates complete!
# chkconfig quantum-dhcp-agent on
# service quantum-dhcp-agent start
Set up and start the L3 agent.
# quantum-l3-setup --plugin linuxbridge
# chkconfig quantum-l3-agent on
# service quantum-l3-agent start
Create service entry in Keystone.
# keystone service-create --name quantum --type network --description 'OpenStack Networking Service'
# service=$(keystone service-list | awk '/quantum/ {print $2}')
# keystone endpoint-create \
    --service-id $service \
    --publicurl "http://opst01:9696/" \
    --adminurl "http://127.0.0.1:9696/" \
    --internalurl "http://127.0.0.1:9696/"
# keystone user-create --name quantum --pass pas4quantum
# keystone tenant-create --name service
# user=$(keystone user-list | awk '/quantum/ {print $2}')
# role=$(keystone role-list | awk '/admin/ {print $2}')
# tenant=$(keystone tenant-list | awk '/service/ {print $2}')
# keystone user-role-add --user-id $user --role-id $role --tenant-id $tenant
Modify nova.conf and restart the related services.
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_username quantum
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_password pas4quantum
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_tenant_name service
# service openstack-nova-api restart
# service openstack-nova-cert restart
# service openstack-nova-objectstore restart
# service openstack-nova-scheduler restart
Install and start the service.
# yum install openstack-dashboard
# setsebool httpd_can_network_connect on
# chkconfig httpd on
# service httpd start
I skipped the VNC gateway configuration...
Copy the admin token from opst01.
$ scp opst01:/root/work/ks_admin_token /root/work/
Install packages.
# yum install openstack-nova python-cinderclient
Modify the configs. Some of the following entries may be unnecessary; I'm taking a conservative approach.
# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT flat_interface em2
# openstack-config --set /etc/nova/nova.conf DEFAULT public_interface em1
# openstack-config --set /etc/nova/nova.conf DEFAULT volume_api_class nova.volume.cinder.API
# openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis ec2,osapi_compute,metadata
# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname opst01
# openstack-config --set /etc/nova/nova.conf DEFAULT glance_api_servers opst01:9292
# openstack-config --set /etc/nova/nova.conf DEFAULT glance_host opst01
# openstack-config --set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@opst01/nova
# openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host opst01
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name admin
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user admin
# openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password pas4admin
# openstack-config --set /etc/nova/api-paste.ini filter:authtoken admin_token $(cat /root/work/ks_admin_token)
Add the following block at the bottom of /etc/libvirt/qemu.conf.
clear_emulator_capabilities = 0
user = "root"
group = "root"
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc", "/dev/hpet", "/dev/net/tun",
]
Start services.
# chkconfig libvirtd on
# chkconfig messagebus on
# service libvirtd start
# service messagebus start
# virsh net-destroy default
# virsh net-autostart default --disable
Install packages and do the basic setup.
# yum install openstack-quantum openstack-quantum-linuxbridge
# quantum-node-setup --plugin linuxbridge
Quantum plugin: linuxbridge
Please enter the Quantum hostname: opst01
Would you like to update the nova configuration files? (y/n): y
# cd /etc/quantum
# ln -s /etc/quantum/plugins/linuxbridge/linuxbridge_conf.ini plugin.ini
Modify the configs and start the services.
# openstack-config --set /etc/quantum/plugin.ini VLANS tenant_network_type vlan
# openstack-config --set /etc/quantum/plugin.ini VLANS network_vlan_ranges physnet1,physnet2:100:199
# openstack-config --set /etc/quantum/plugin.ini LINUX_BRIDGE physical_interface em1,em2
# openstack-config --set /etc/quantum/plugin.ini LINUX_BRIDGE physical_interface_mappings physnet1:em1,physnet2:em2
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_username quantum
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_password pas4quantum
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT quantum_admin_auth_url http://opst01:35357/v2.0/
# chkconfig quantum-linuxbridge-agent on
# chkconfig openstack-nova-compute on
# service quantum-linuxbridge-agent start
# service openstack-nova-compute start
The following steps should be done on opst01. But during these operations, opst01's public IP (10.0.1.251) may become temporarily unavailable while the virtual router's IP (10.0.1.1) is being set up, so log in to opst01 through a back door. (For example, add a temporary IP to em2:1 on opst01 and opst02, and log in from opst02 to em2:1 of opst01.) If you separate the Public NW and the Management NW, you don't have to worry about this.
Create the private network using VLAN 101.
# cd /root/work
# . keystonerc_admin
# tenant=$(keystone tenant-list | awk '/redhat/ {print $2}')
# quantum net-create --tenant-id $tenant net01 --provider:network_type vlan --provider:physical_network physnet2 --provider:segmentation_id 101
# quantum subnet-create --tenant-id $tenant --name subnet01 net01 192.168.101.0/24
Create the public (external) network as a flat (non-VLAN) one. The gateway IP should be that of the physical network.
# tenant=$(keystone tenant-list | awk '/service/ {print $2}')
# quantum net-create --tenant-id $tenant public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True
# quantum subnet-create --tenant-id $tenant --name pub_subnet01 --gateway 10.0.1.254 public01 10.0.1.0/24 --enable_dhcp False
Create the virtual router.
# quantum router-create router01
Check the router's ID, set it as router_id in /etc/quantum/l3_agent.ini, and restart the related services.
# quantum router-list | awk '/router01/ {print $2}'
ccf3a044-56e9-40c3-85bf-eacc059606d6
# grep router_id /etc/quantum/l3_agent.ini
router_id = ccf3a044-56e9-40c3-85bf-eacc059606d6
# service quantum-server restart
# service quantum-linuxbridge-agent restart
# service quantum-l3-agent restart
# service quantum-dhcp-agent restart
Connect networks to the router.
# quantum router-gateway-set router01 public01
# quantum router-interface-add router01 subnet01
And, here's the final trick for a routing problem...
You may see duplicate routing table entries for the private (VLAN) network, as below.
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.101.0   0.0.0.0         255.255.255.0   U     0      0        0 ns-416260ac-5f
192.168.101.0   0.0.0.0         255.255.255.0   U     0      0        0 qr-55dcd739-f8
...
qr-55dcd739-f8 is the virtual router's port, and ns-416260ac-5f is for the dnsmasq process. This works fine at first, but once you reboot opst01, their order is reversed, as below.
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.101.0   0.0.0.0         255.255.255.0   U     0      0        0 qr-55dcd739-f8
192.168.101.0   0.0.0.0         255.255.255.0   U     0      0        0 ns-416260ac-5f
...
Then dnsmasq stops working. As a workaround, you have to remove the virtual router's entry by hand.
# route del -net 192.168.101.0/24 dev qr-55dcd739-f8
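If you want to avoid copying interface names by hand, the entry to delete can be identified mechanically: find the duplicated destination whose device is a qr- (virtual router) port. A sketch using the route output from above as sample input (the interface names are the ones shown earlier):

```shell
# Sample lines reproducing the duplicate entries from 'route -n' above.
routes='192.168.101.0   0.0.0.0   255.255.255.0   U   0   0   0 qr-55dcd739-f8
192.168.101.0   0.0.0.0   255.255.255.0   U   0   0   0 ns-416260ac-5f'
# Match the private-network destination ($1) and pick the row whose
# interface column ($8) starts with "qr-": that's the route to remove.
dev=$(echo "$routes" | awk '$1 == "192.168.101.0" && $8 ~ /^qr-/ {print $8}')
echo "route del -net 192.168.101.0/24 dev $dev"
```

On a live system you would feed `route -n` into the same awk instead of the sample string.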
Or you can try my patches in this BZ.
Anyway, now you can launch VMs from the Horizon dashboard. Note that floating IP assignment is not supported by Horizon yet; you have to do it with the quantum CLI.
For example, once you have launched a VM connected to net01 (Private NW) as user enakai in the tenant "redhat", create a new floating IP.
# cd /root/work
# . keystonerc_enakai
# quantum floatingip-create public01
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 10.0.1.2                             |
| floating_network_id | 634356a4-ce08-44f2-98da-812197f8bca9 |
| id                  | 7b7e28c1-1f29-4a63-98ef-7e1d4c41f21b |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 5e308a4f4a73488d9facbc3fb23c7d38     |
+---------------------+--------------------------------------+
Check the VM's port ID.
# quantum port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 9b3ee005-f827-445d-b8c5-2bd2f9fbedf6 |      | fa:16:3e:64:38:f9 | {"subnet_id": "dd9dd15c-8d99-47f2-9135-014982f8ef95", "ip_address": "192.168.101.8"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
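The port ID can also be cut out of the table by filtering on the VM's fixed IP instead of copying it by hand. A sketch using the sample row from above as input:

```shell
# The sample row from 'quantum port-list' above, as a string.
row='| 9b3ee005-f827-445d-b8c5-2bd2f9fbedf6 |      | fa:16:3e:64:38:f9 | {"subnet_id": "dd9dd15c-8d99-47f2-9135-014982f8ef95", "ip_address": "192.168.101.8"} |'
# Select the row mentioning the VM's fixed IP and print the second
# whitespace-separated field, which is the port id.
port_id=$(echo "$row" | awk '/192\.168\.101\.8/ {print $2}')
echo "$port_id"
```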
Associate the floating IP to the port.
# quantum floatingip-associate 7b7e28c1-1f29-4a63-98ef-7e1d4c41f21b 9b3ee005-f827-445d-b8c5-2bd2f9fbedf6
Associated floatingip 7b7e28c1-1f29-4a63-98ef-7e1d4c41f21b