Install and Configure the OpenStack Network Service (Neutron)

Based on the OpenStack Icehouse release.


Configure the Neutron controller node:

1. On the keystone node

mysql -uroot -p
mysql> create database neutron;
mysql> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'NEUTRON-DBPASS';
mysql> grant all privileges on neutron.* to 'neutron'@'%' identified by 'NEUTRON-DBPASS';
mysql> flush privileges;

# Create a neutron user
keystone user-create --tenant service --name neutron --pass NEUTRON-USER-PASSWORD

# Add role to the neutron user
keystone user-role-add --user neutron --tenant service --role admin

# Create the neutron service
keystone service-create --name=neutron --type=network --description="Neutron Network Service"

# Create a Networking endpoint
keystone endpoint-create --region RegionOne --service neutron --publicurl=http://NEUTRON-SERVER:9696 --internalurl=http://NEUTRON-SERVER:9696 --adminurl=http://NEUTRON-SERVER:9696

2. On the neutron server node (here we run it on the keystone node)

yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient

yum -y update iproute

yum -y install kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm    # kernel with network namespace support for EL6; make the rpm available locally or via the RDO repository

reboot

3. vi /etc/neutron/neutron.conf
[database]
connection=mysql://neutron:NEUTRON-DBPASS@MYSQL-SERVER/neutron

[DEFAULT]
auth_strategy=keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=controller
notify_nova_on_port_status_changes=True
notify_nova_on_port_data_changes=True
nova_url=http://controller:8774/v2
nova_admin_username=nova
# SERVICE-TENANT-ID is the id of the service tenant, e.g. from: keystone tenant-list | awk '/ service / { print $2 }'
nova_admin_tenant_id=SERVICE-TENANT-ID
nova_admin_password=NOVA-USER-PASSWORD
nova_admin_auth_url=http://controller:35357/v2.0
core_plugin=ml2
service_plugins=router
verbose=True

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON-USER-PASSWORD

Comment out any lines in the [service_providers] section
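
For reference, a commented-out [service_providers] section looks roughly like this (the exact provider lines shipped in your neutron.conf may differ):

[service_providers]
# service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
# service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default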


4. vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=gre
tenant_network_types=gre
mechanism_drivers=openvswitch

[ml2_type_gre]

tunnel_id_ranges=1:1000

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True

5. On the nova controller node:

vi /etc/nova/nova.conf
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://NEUTRON-SERVER:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON-USER-PASSWORD
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

vif_plugging_is_fatal=false
vif_plugging_timeout=0


6. cd /etc/neutron
ln -s plugins/ml2/ml2_conf.ini plugin.ini

7. service openstack-nova-api restart; service openstack-nova-scheduler restart; service openstack-nova-conductor restart

8. chown -R neutron:neutron /etc/neutron /var/log/neutron

service neutron-server start; chkconfig neutron-server on
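
A quick sanity check that the server answers API calls (assuming the admin credentials from ~/adminrc are in place) is to list the loaded extensions:

source ~/adminrc
neutron ext-list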

Neutron Network Node:
1. service NetworkManager stop; chkconfig NetworkManager off
service network start; chkconfig network on

Disable the firewall and SELinux (set SELINUX=disabled in /etc/selinux/config, then setenforce 0 or reboot)
service iptables stop; chkconfig iptables off
service ip6tables stop; chkconfig ip6tables off

2. eth0 is used for the management network (192.168.1.0/24), eth1 for the internal/tunnel network (192.168.30.0/24), and eth2 (configured below, no IP address) for the external public/floating network; it's recommended to use a separate NIC for the management network.


vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none


3. set hostname in /etc/sysconfig/network and /etc/hosts
192.168.1.10    controller
192.168.1.11    node1
192.168.1.12    neutronnet


4. yum -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntpd start; chkconfig ntpd on

5. yum -y install  http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm

yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install mysql MySQL-python

6. yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

yum -y update iproute

yum -y install kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm

reboot


7. Enable packet forwarding and disable packet destination filtering
vi /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

sysctl -p

8. vi /etc/neutron/neutron.conf

[DEFAULT]
auth_strategy=keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=controller
core_plugin=ml2
service_plugins=router
verbose=True

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON-USER-PASSWORD

Comment out any lines in the [service_providers] section


9. vi /etc/neutron/l3_agent.ini

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces=True
verbose = True


vi /etc/neutron/dhcp_agent.ini

interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
use_namespaces=True
verbose = True


10. vi /etc/neutron/metadata_agent.ini
auth_url = http://controller:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON-USER-PASSWORD
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA-PASSWORD

verbose = True


11. On the nova controller node:
vi /etc/nova/nova.conf
neutron_metadata_proxy_shared_secret=METADATA-PASSWORD
service_neutron_metadata_proxy=true

service openstack-nova-api restart


12. vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=gre
tenant_network_types=gre
mechanism_drivers=openvswitch

[ml2_type_gre]

tunnel_id_ranges=1:1000

[ovs]

# IP address of this node's instance tunnel interface (eth1)
local_ip = 192.168.30.12
tunnel_type = gre
enable_tunneling = True

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True


13. service openvswitch start; chkconfig openvswitch on

ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth2

ethtool -K eth2 gro off        # turn off generic receive offload on the external interface

ethtool -k eth2                # verify that gro is now off


vi /etc/sysconfig/network-scripts/ifcfg-eth2
ETHTOOL_OPTS="-K ${DEVICE} gro off"


14. cd /etc/neutron
ln -s plugins/ml2/ml2_conf.ini plugin.ini


15. cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent

16. chown -R neutron:neutron /etc/neutron /var/log/neutron

for s in neutron-{dhcp,metadata,l3,openvswitch}-agent; do
service $s start
chkconfig $s on
done
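
Before moving on, you can confirm the Open vSwitch wiring on this node; br-int, br-ex and the eth2 port on br-ex should all show up:

ovs-vsctl show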

Neutron Compute Node:

1. service NetworkManager stop; chkconfig NetworkManager off
service network start; chkconfig network on


Disable the firewall and SELinux (set SELINUX=disabled in /etc/selinux/config, then setenforce 0 or reboot)
service iptables stop; chkconfig iptables off
service ip6tables stop; chkconfig ip6tables off

2. eth0 is used for the management/public/floating network (192.168.1.0/24) and eth1 for the internal/tunnel network (192.168.30.0/24); it's recommended to use a separate NIC for the management network.


3. set hostname in /etc/sysconfig/network and /etc/hosts
192.168.1.10    controller
192.168.1.11    node1

192.168.1.12    neutronnet


4. yum -y install qemu-kvm libvirt python-virtinst bridge-utils  
# make sure modules are loaded
lsmod | grep kvm

service libvirtd start; chkconfig libvirtd on
service messagebus start; chkconfig messagebus on

5. yum -y install ntp
vi /etc/ntp.conf
server 192.168.1.10
restrict 192.168.1.10

service ntpd start; chkconfig ntpd on

6. yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm

yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install mysql MySQL-python openstack-utils

7. yum install -y openstack-nova-compute


8. vi /etc/nova/nova.conf
[database]
connection=mysql://nova:NOVA-DATABASE-PASSWORD@MYSQL-SERVER/nova

[DEFAULT]
auth_strategy=keystone
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_hostname=controller
my_ip=192.168.1.11
vnc_enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.11
novncproxy_base_url=http://controller:6080/vnc_auto.html
glance_host=controller

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=nova
admin_password=NOVA-USER-PASSWORD


9. egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns one or greater, hardware virtualization is available and nothing needs to change.
If it returns zero, set libvirt_type=qemu in nova.conf.
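
As a small sketch, the two cases can be handled in one go, assuming openstack-config from openstack-utils (installed in step 6) is available:

# fall back to plain QEMU emulation when the CPU exposes no VT flags
if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
fi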

10. chown -R nova:nova /etc/nova /var/log/nova

service openstack-nova-compute start; chkconfig openstack-nova-compute on


Now for the Neutron plugin agent (Open vSwitch) on the compute node:

11. disable packet destination filtering
vi /etc/sysctl.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

sysctl -p

12. yum -y install openstack-neutron-ml2 openstack-neutron-openvswitch

yum -y update iproute

yum -y install kernel-2.6.32-358.123.2.openstack.el6.x86_64.rpm

reboot


13. vi /etc/neutron/neutron.conf

[DEFAULT]
auth_strategy=keystone
rpc_backend=neutron.openstack.common.rpc.impl_qpid
qpid_hostname=controller
core_plugin=ml2
service_plugins=router
verbose=True

[keystone_authtoken]
auth_host=controller
auth_port=35357
auth_protocol=http
auth_uri=http://controller:5000
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON-USER-PASSWORD

Comment out any lines in the [service_providers] section


14. vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers=gre
tenant_network_types=gre
mechanism_drivers=openvswitch

[ml2_type_gre]

tunnel_id_ranges=1:1000

[ovs]

# IP address of this node's instance tunnel interface (eth1)
local_ip = 192.168.30.11
tunnel_type = gre
enable_tunneling = True

[securitygroup]

firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True


15. service openvswitch start; chkconfig openvswitch on

ovs-vsctl add-br br-int


16. vi /etc/nova/nova.conf
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://NEUTRON-SERVER:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON-USER-PASSWORD
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

vif_plugging_is_fatal=false
vif_plugging_timeout=0


17. cd /etc/neutron
ln -s plugins/ml2/ml2_conf.ini plugin.ini


18. cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent


service openstack-nova-compute restart


19. chown -R neutron:neutron /etc/neutron /var/log/neutron

service neutron-openvswitch-agent start; chkconfig neutron-openvswitch-agent on

Creating the Neutron networks

On the controller node:
1. Check that neutron-server is communicating with its agents:

neutron agent-list


source ~/adminrc    (admin credentials are used for steps 1-2)

# create external network
neutron net-create ext-net --shared --router:external=True [ --provider:network_type gre --provider:segmentation_id SEG_ID ]

Note: SEG_ID is the GRE tunnel (segmentation) id to use for this provider network.
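
For example, to pin the external network to GRE tunnel id 1 (the id here is only an illustration; it must fall inside the tunnel_id_ranges configured in ml2_conf.ini):

neutron net-create ext-net --shared --router:external=True --provider:network_type gre --provider:segmentation_id 1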


2. # create subnet on external network
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.1.200,end=192.168.1.210 --disable-dhcp --dns-nameserver 210.22.84.3 --dns-nameserver 210.22.70.3 --gateway 192.168.1.1 192.168.1.0/24

3. # create tenant network

source ~/demo1rc    (tenant credentials are used for steps 3-7)

neutron net-create demo-net


4. # create subnet on tenant network
neutron subnet-create demo-net --name demo-subnet --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR

neutron subnet-create demo-net --name demo-subnet --dns-nameserver x.x.x.x --gateway 10.10.10.1 10.10.10.0/24

5. # create virtual router to connect external and tenant network
neutron router-create demo-router

6. # Attach the router to the tenant subnet
neutron router-interface-add demo-router demo-subnet

7. # Attach the router to the external network by setting it as the gateway
neutron router-gateway-set demo-router ext-net

Note: the tenant router gateway takes the lowest IP address in the floating IP address range, here 192.168.1.200.


neutron net-list

neutron subnet-list

neutron router-port-list demo-router


Launch Instances

for demo1 tenant:
source ~/demo1rc

neutron security-group-create --description "Test Security Group" test-sec

# permit ICMP
neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 test-sec

# permit ssh
neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 test-sec

neutron security-group-rule-list


nova keypair-add demokey > demokey.pem
nova keypair-list


nova flavor-list
nova image-list

neutron net-list

neutron subnet-list


demonet=`neutron net-list | grep demo-net | awk '{ print $2 }'`
nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic net-id=$demonet CirrOS

Note: the KVM nodes need enough free memory, or instance creation will fail.
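
A rough way to check the available capacity is the hypervisor summary (run it from a shell with the admin credentials, e.g. after source ~/adminrc; the figures naturally depend on your hardware):

nova hypervisor-stats    # compare memory_mb with memory_mb_used across all compute nodes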


1. You can use VMware Workstation to build images, then upload them to Glance using the dashboard (or the CLI; see the sketch below).
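
If you prefer the CLI over the dashboard, an upload might look like this (the image name and file path below are only placeholders):

glance image-create --name "Ubuntu 14.04" --disk-format qcow2 --container-format bare --is-public True --file /root/ubuntu-14.04.qcow2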



ubuntu
1). vi /etc/hosts and remove the 127.0.1.1 entry
2). enable ssh login
3). enable dhcp client on interface
4). enable normal username/password
5). set root password

centos/redhat
1). rm -rf /etc/ssh/ssh_host_*
2). vi /etc/sysconfig/network-scripts/ifcfg-ethX to remove HWADDR and UUID items
3). rm -rf /etc/udev/rules.d/70-persistent-net.rules
4). enable ssh login
5). enable dhcp client on interface (also vi /etc/sysconfig/network, /etc/resolv.conf)

6). enable normal username/password
7). set root password

2. Launch an instance without a keypair (see the sketch below).
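
For example, a boot without --key-name (the instance name CirrOS2 is just an example; you then log in with the image's built-in username/password):

demonet=`neutron net-list | grep demo-net | awk '{ print $2 }'`
nova boot --flavor 1 --image "CirrOS 0.3.2" --security-groups test-sec --nic net-id=$demonet CirrOS2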


nova commands:

nova list; nova show CirrOS

nova stop CirrOS
nova start CirrOS


# get the VNC console address to open in a web browser:


nova get-vnc-console CirrOS novnc


# Create a floating IP address on the ext-net external network

neutron floatingip-create ext-net

neutron floatingip-list


# Associate the floating IP address with your instance, even while it's running

nova floating-ip-associate CirrOS 192.168.1.201

(to detach it: nova floating-ip-disassociate CirrOS 192.168.1.201)

nova list



ping 192.168.1.201 (floating ip)
Use Xshell or PuTTY, or: ssh -i demokey.pem cirros@192.168.1.201  (username: cirros, password: cubswin:))

[ for ubuntu cloud image: username is ubuntu, for fedora cloud image: username is fedora ]

Now we can ping and SSH to 192.168.1.201, and the CirrOS instance can reach the Internet.


Note: make sure /var/lib/nova/instances has enough space to store the VMs; you can mount a dedicated partition on it (local or shared storage).
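
As an illustration, assuming a spare partition /dev/sdb1 formatted as ext4 (both are assumptions, adapt to your storage):

mount /dev/sdb1 /var/lib/nova/instances
echo "/dev/sdb1 /var/lib/nova/instances ext4 defaults 0 0" >> /etc/fstab
chown -R nova:nova /var/lib/nova/instances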


Fixed IP addresses with OpenStack Neutron for tenant networks

neutron subnet-list
neutron subnet-show demo-subnet
neutron port-create demo-net --fixed-ip ip_address=10.10.10.10 --name VM-NAME
nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic port-id=xxx VM-NAME
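
To avoid copying the port id by hand, it can be looked up by the port name used above (a small sketch; the awk filter assumes the port name is unique):

portid=`neutron port-list | awk '/ VM-NAME / { print $2 }'`
nova boot --flavor 1 --image "CirrOS 0.3.2" --key-name demokey --security-groups test-sec --nic port-id=$portid VM-NAME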


Access the noVNC console from the Internet, method 1

1. Add another interface facing the Internet on the nova controller (normally the keystone+dashboard node)

2. assign a public ip address

3. On the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_nova_controller:6080/vnc_auto.html

service openstack-nova-compute restart


4. nova get-vnc-console CirrOS novnc

http://public_ip_address_of_nova_controller:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673


Access the noVNC console from the Internet, method 2

1. You can publish the dashboard web site to the Internet (normally the keystone+dashboard node)



2. On the compute node, vi /etc/nova/nova.conf
novncproxy_base_url=http://public_ip_address_of_firewall:6080/vnc_auto.html

service openstack-nova-compute restart


3. nova get-vnc-console CirrOS novnc

http://public_ip_address_of_firewall:6080/vnc_auto.html?token=4f9c1f7e-4288-4fda-80ad-c1154a954673

