Install and Configure OpenStack Compute Service (Nova)

Based on the OpenStack Icehouse release

Nova controller node setup

1. Install and Configure OpenStack Compute Service (Nova)

yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient


mysql -uroot -p
mysql> create database nova;
mysql> grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA-DBPASS';
mysql> grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA-DBPASS';
mysql> flush privileges;


vi /etc/nova/nova.conf
[database]
connection=mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova

nova-manage db sync
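
# to confirm the schema was created, list a few of the new tables
# (substitute your MySQL host and the NOVA-DBPASS placeholder)
mysql -unova -pNOVA-DBPASS -h MYSQL-SERVER -e 'show tables;' nova | head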

vi /etc/nova/nova.conf

[DEFAULT]
my_ip=192.168.1.10
auth_strategy=keystone
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=controller
vncserver_listen=192.168.1.10
vncserver_proxyclient_address=192.168.1.10

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_port=35357
auth_protocol=http
admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service
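
If you prefer scripted edits, the openstack-config tool (from the openstack-utils package, if it is installed on this node) can set the same keys; for example:

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password NOVA-USER-PASSWORD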


# add nova user (in the service tenant)
keystone user-create --tenant service --name nova --pass NOVA-USER-PASSWORD

# add nova user in admin role
keystone user-role-add --user nova --tenant service --role admin


# add service for nova
keystone service-create --name=nova --type=compute --description="Nova Compute Service"


# add endpoint for nova
keystone endpoint-create --region RegionOne --service nova --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
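
# verify the service and endpoint registrations
keystone service-list
keystone endpoint-list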


chown -R nova:nova /etc/nova /var/log/nova

for service in api cert consoleauth scheduler conductor novncproxy; do
  service openstack-nova-$service start
  chkconfig openstack-nova-$service on
done

To check the registered nova compute nodes:

nova-manage service list
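
In the State column of the output, ':-)' means the service is reporting in on schedule, while 'XXX' means it has stopped checking in.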

nova image-list


Nova compute node setup

1. service NetworkManager stop; chkconfig NetworkManager off
service network start; chkconfig network on


disable the firewall and SELinux (for SELinux, set SELINUX=disabled in /etc/selinux/config and reboot, or run setenforce 0)
service iptables stop; chkconfig iptables off
service ip6tables stop; chkconfig ip6tables off

2. eth0 is used for the management/public/floating network (192.168.1.0/24) and eth1 for the internal/flat network (192.168.20.0/24); it is recommended to use a separate NIC for the management network


vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
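
# after saving, bring the interface up
ifup eth1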


3. set hostname in /etc/sysconfig/network and /etc/hosts
192.168.1.10    controller
192.168.1.11    node1


4. yum -y install qemu-kvm libvirt python-virtinst bridge-utils  
# make sure modules are loaded
lsmod | grep kvm
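# if nothing is listed, load the module by hand (kvm_intel for Intel CPUs, kvm_amd for AMD)
modprobe kvm_intel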

service libvirtd start; chkconfig libvirtd on
service messagebus start; chkconfig messagebus on

5. yum -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntpd start; chkconfig ntpd on
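
# verify the node is syncing against the controller
ntpq -p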

6. yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm

yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install mysql MySQL-python openstack-utils

7. yum install -y openstack-nova-compute


8. vi /etc/nova/nova.conf

[database]
connection=mysql://nova:NOVA-DBPASS@MYSQL-SERVER/nova

[DEFAULT]
auth_strategy=keystone
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=controller
my_ip=192.168.1.11
vnc_enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.11
novncproxy_base_url=http://controller:6080/vnc_auto.html
glance_host=controller

[keystone_authtoken]
auth_uri=http://controller:5000
auth_host=controller
auth_port=35357
auth_protocol=http
admin_user=nova
admin_password=NOVA-USER-PASSWORD
admin_tenant_name=service


9. egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns one or greater, hardware virtualization is available and nothing needs to change.
If it returns zero, set libvirt_type=qemu in nova.conf.
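
For example, using openstack-config from the openstack-utils package installed in step 6:

openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu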

10. chown -R nova:nova /etc/nova /var/log/nova

service openstack-nova-compute start; chkconfig openstack-nova-compute on


On the controller node, check node1's status:

nova-manage service list


Now for legacy FlatDHCP networking:

# on controller node

vi /etc/nova/nova.conf

network_api_class=nova.network.api.API
security_group_api=nova

service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart


11. on all compute nodes: yum -y install openstack-nova-network openstack-nova-api


12. vi /etc/nova/nova.conf

network_api_class=nova.network.api.API
security_group_api=nova

network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=false
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_interface=eth1
flat_network_bridge=br100
public_interface=eth0

#auto_assign_floating_ip=True


Notes:

By default, all VMs on the "flat" network can see one another, regardless of which tenant they belong to. Setting allow_same_net_traffic=false configures iptables policies to block all traffic between instances (even within the same tenant) unless it is explicitly unblocked in a security group.
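
For example, to unblock all TCP traffic between instances on the 10.10.10.0/24 fixed range within the default security group (illustrative values):

nova secgroup-add-rule default tcp 1 65535 10.10.10.0/24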


13. service openstack-nova-metadata-api start; chkconfig openstack-nova-metadata-api on
service openstack-nova-network start; chkconfig openstack-nova-network on

14. on controller

# create flat network

source ~/adminrc

demo1=`keystone tenant-list | grep demo1 | awk '{ print $2 }'`


nova network-create vmnet1 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.10.0/24 --bridge br100 --multi-host T --project-id $demo1

Notes: dns1 and dns2 are public DNS servers; use any private network range for --fixed-range-v4


Now for legacy VLAN networking:


there is a bug in legacy VLAN networking; to work around it, on the nova controller and all compute nodes:
vi /usr/lib/python2.6/site-packages/nova/network/manager.py
# change line 1212 to read:
vlan = kwargs.get('vlan_start', None)

reboot


# on controller node

vi /etc/nova/nova.conf

network_api_class=nova.network.api.API
security_group_api=nova

service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart


11. on all compute nodes: yum -y install openstack-nova-network openstack-nova-api


12. vi /etc/nova/nova.conf

network_api_class=nova.network.api.API
security_group_api=nova
network_manager=nova.network.manager.VlanManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=false
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
vlan_start=100
vlan_interface=eth1
public_interface=eth0
#auto_assign_floating_ip=True


13. service openstack-nova-metadata-api start; chkconfig openstack-nova-metadata-api on
service openstack-nova-network start; chkconfig openstack-nova-network on

14. on controller

# create vlan network

source ~/adminrc

# Normally: one subnet --> one VLAN id --> one security group

demo1=`keystone tenant-list | grep demo1 | awk '{ print $2 }'`


nova network-create vmnet1 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.10.0/24 --vlan 100 --multi-host T --project-id $demo1

Notes: dns1 and dns2 are public DNS servers; use any private network range for --fixed-range-v4


keystone tenant-create --name=demo2 --description="Demo2 Tenant"
demo2=`keystone tenant-list | grep demo2 | awk '{ print $2 }'`
nova network-create vmnet2 --dns1 210.22.84.3 --dns2 210.22.70.3 --fixed-range-v4 10.10.11.0/24 --vlan 110 --multi-host T --project-id $demo2


Launch Instances

source ~/demo1rc

nova secgroup-list

# create test-sec group

nova secgroup-create test-sec "Test Security Group"

# permit ssh
nova secgroup-add-rule test-sec tcp 22 22 0.0.0.0/0

# permit ICMP
nova secgroup-add-rule test-sec icmp -1 -1 0.0.0.0/0

nova secgroup-list-rules test-sec


nova keypair-add demokey > demokey.pem
nova keypair-list
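
# ssh refuses a private key file that is world-readable, so restrict it first
chmod 600 demokey.pem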


nova flavor-list
nova image-list


Source ~/adminrc to run the commands below:

nova network-list
nova-manage network list

vmnet1=`nova network-list | grep vmnet1 | awk '{ print $2 }'`


source ~/demo1rc

nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic net-id=$vmnet1 CirrOS
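
The instance takes a moment to build; run nova list until its status changes from BUILD to ACTIVE.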


1. You can use VMware Workstation to build images, then upload them to Glance via the dashboard.


ubuntu
1). vi /etc/hosts to remove the 127.0.1.1 entry
2). enable ssh login
3). enable dhcp client on interface
4). enable normal username/password
5). set root password

centos/redhat
1). rm -rf /etc/ssh/ssh_host_*
2). vi /etc/sysconfig/network-scripts/ifcfg-ethX to remove HWADDR and UUID items
3). rm -rf /etc/udev/rules.d/70-persistent-net.rules
4). enable ssh login
5). enable dhcp client on interface (also vi /etc/sysconfig/network, /etc/resolv.conf)

6). enable normal username/password
7). set root password

2. You can also launch an instance without a keypair.


nova commands:

nova list; nova show CirrOS

nova stop CirrOS
nova start CirrOS


# get vnc console address via web browser:

nova get-vnc-console CirrOS novnc


# create floating network

nova-manage floating create --ip_range 192.168.1.248/29

Notes: floating IPs are taken from the public network range on eth0 (192.168.1.0/24 here)
nova-manage floating list


# Associate the floating IP address with your instance (this works even while it is running)

nova floating-ip-associate CirrOS 192.168.1.249

( nova floating-ip-disassociate CirrOS 192.168.1.249 )

nova list


ping 192.168.1.249 (the floating IP)
Use Xshell or PuTTY, or plain ssh: ssh -i demokey.pem cirros@192.168.1.249 (username: cirros, password: cubswin:))

[ for the Ubuntu cloud image the username is ubuntu; for the Fedora cloud image it is fedora ]

Now we can ping and SSH to 192.168.1.249, and the CirrOS instance can reach the Internet.


Notes: you need enough space in /var/lib/nova/instances to store the VMs; you can mount a partition there (using local or shared storage).
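
For example, to dedicate a partition to it (assuming an empty /dev/sdb1; adjust the device to your disk):

mkfs.ext4 /dev/sdb1
echo "/dev/sdb1 /var/lib/nova/instances ext4 defaults 0 0" >> /etc/fstab
mount /var/lib/nova/instances
chown -R nova:nova /var/lib/nova/instances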


Access the noVNC console from the Internet: method 1

1. add another interface facing the Internet on the nova controller (normally the keystone+dashboard node)

2. assign a public ip address

3. on the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_nova_controller:6080/vnc_auto.html

service openstack-nova-compute restart


4. nova get-vnc-console CirrOS novnc

http://public_ip_address_of_nova_controller:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673


Access the noVNC console from the Internet: method 2

1. you can publish the dashboard web site to the Internet (normally the keystone+dashboard node)


2. on the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_firewall:6080/vnc_auto.html

service openstack-nova-compute restart


3. nova get-vnc-console CirrOS novnc

http://public_ip_address_of_firewall:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673

