Deploying OpenStack Havana on Ubuntu 12.04 Server [OVS+GRE] (Part 2): Installing the Network Node

You are welcome to follow my Weibo: http://weibo.com/u/216633637

 

Series: Deploying OpenStack Havana on Ubuntu 12.04 Server [OVS+GRE]

 

Network node:

1. Update the system before installation

  • After installing Ubuntu 12.04 Server 64-bit, switch to root to complete the configuration:

sudo su - 
  • Add the Havana repository:
#apt-get install python-software-properties

#add-apt-repository cloud-archive:havana
  • Upgrade the system:

apt-get update

apt-get upgrade

apt-get dist-upgrade

 

2. Install and configure the NTP service

  • Install the NTP service:
apt-get install ntp
  • Configure NTP to synchronize time from the controller node:
 
sed -i 's/server 0.ubuntu.pool.ntp.org/#server 0.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 1.ubuntu.pool.ntp.org/#server 1.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 2.ubuntu.pool.ntp.org/#server 2.ubuntu.pool.ntp.org/g' /etc/ntp.conf

sed -i 's/server 3.ubuntu.pool.ntp.org/#server 3.ubuntu.pool.ntp.org/g' /etc/ntp.conf



# Set the network node to follow your controller node

sed -i 's/server ntp.ubuntu.com/server 10.10.10.2/g' /etc/ntp.conf



service ntp restart
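The sed commands above comment out the default Ubuntu pool servers, and the last one points the node at the controller. As a sketch, you can dry-run the same substitutions against a sample copy (a hypothetical file under /tmp, not the real /etc/ntp.conf) to confirm they do what you expect:

```shell
# Sample copy for a dry run (hypothetical path); the real target is /etc/ntp.conf.
cat > /tmp/ntp.conf.sample <<'EOF'
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
server 2.ubuntu.pool.ntp.org
server 3.ubuntu.pool.ntp.org
server ntp.ubuntu.com
EOF

# Comment out the Ubuntu pool servers and point the node at the controller.
sed -i -e 's/^server [0-3]\.ubuntu\.pool\.ntp\.org/#&/' \
       -e 's/server ntp\.ubuntu\.com/server 10.10.10.2/' /tmp/ntp.conf.sample

# Only "server 10.10.10.2" should remain uncommented:
grep '^server' /tmp/ntp.conf.sample
```

Once the output shows only server 10.10.10.2, the same edits are safe to apply to the real /etc/ntp.conf.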
 

 

  • NIC configuration. This step is a little interesting: I only have one NIC available, but three subnets need to be configured (and three bridges will eventually be created), so I use network alias devices here. Before installing OVS, the NIC configuration looks like this:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.122.3
netmask 255.255.255.0
gateway 192.168.122.1
dns-nameservers 192.168.122.1

auto eth0:1
iface eth0:1 inet static
address 10.10.10.3
netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
address 10.20.20.3
netmask 255.255.255.0

 

 

  • Edit /etc/sysctl.conf to enable IP forwarding and disable reverse-path filtering, so the network node can handle traffic for the VMs:
net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

# Run the following command to apply the settings
sysctl -p
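Before running sysctl -p, it is worth confirming that all three keys actually made it into the file. A minimal sketch, shown here against a hypothetical sample copy in /tmp (point the grep at /etc/sysctl.conf on the real node):

```shell
# Sample copy for illustration (hypothetical path); on the real node,
# grep /etc/sysctl.conf directly.
cat > /tmp/sysctl.conf.sample <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF

# Report each expected key as OK or MISSING.
for key in net.ipv4.ip_forward net.ipv4.conf.all.rp_filter \
           net.ipv4.conf.default.rp_filter; do
    grep -q "^$key=" /tmp/sysctl.conf.sample && echo "$key OK" || echo "$key MISSING"
done
```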

 

3. Install Open vSwitch

  • Install the Open vSwitch packages:
apt-get install  openvswitch-controller openvswitch-switch openvswitch-datapath-dkms openvswitch-datapath-source
module-assistant auto-install openvswitch-datapath
/etc/init.d/openvswitch-switch restart
  • Create the bridges:
#br-int will be used for VM integration

ovs-vsctl add-br br-int



# br-ex is used to make the VMs accessible from the internet

ovs-vsctl add-br br-ex
  • Add the NIC eth0 to br-ex (note: connectivity over eth0 will be interrupted from this point until br-ex is configured below, so run this from the console if possible):
ovs-vsctl add-port br-ex eth0
  • Modify the NIC configuration /etc/network/interfaces again:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br-ex
iface br-ex inet static
address 192.168.122.3
netmask 255.255.255.0
gateway 192.168.122.1
dns-nameservers 192.168.122.1

 

# For exposing the OpenStack API over the internet

auto eth0
iface eth0 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

 

auto eth0:1
iface eth0:1 inet static
address 10.10.10.3
netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
address 10.20.20.3
netmask 255.255.255.0

 

  • Restart the networking service:
/etc/init.d/networking restart

 

After br-ex takes over eth0, all external access is handled by br-ex. Remember that we only have one NIC, attached to the same "switch", so pay attention to the route settings for eth0:1 and eth0:2.

If everything is working, the output of the route command should look like this:

Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.122.1 0.0.0.0 UG 100 0 0 br-ex
10.10.10.0 * 255.255.255.0 U 0 0 0 br-ex
10.20.20.0 * 255.255.255.0 U 0 0 0 br-ex
192.168.122.0 * 255.255.255.0 U 0 0 0 br-ex

 

Or, equivalently, the output of ip route show should be:

root@Network:~# ip route show
default via 192.168.122.1 dev br-ex metric 100
10.10.10.0/24 dev br-ex proto kernel scope link src 10.10.10.3
10.20.20.0/24 dev br-ex proto kernel scope link src 10.20.20.3
192.168.122.0/24 dev br-ex proto kernel scope link src 192.168.122.3

That's right: the routes for 10.10.10.0/24 and 10.20.20.0/24 must both use br-ex as their device, otherwise you will not be able to ping the controller node (10.10.10.2). If those two routes still point at eth0 as their iface, fix them as follows:

 

route del -net 10.10.10.0/24 dev eth0
route del -net 10.20.20.0/24 dev eth0
ip route add 10.10.10.0/24 proto kernel scope link src 10.10.10.3 dev br-ex
ip route add 10.20.20.0/24 proto kernel scope link src 10.20.20.3 dev br-ex

 

To keep this NIC setup after every reboot, you can add the commands above to the /etc/rc.local script. On a physical machine with a single NIC, you could instead name the alias devices br-ex:1 and br-ex:2 directly, which should work without issue. But on a KVM virtual machine, even when using br-ex aliases, you still need to use ip route to give the routes the proto kernel scope link attributes.
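To spot the problem quickly, you can filter the ip route show output for tunnel or management routes still bound to eth0. The sketch below runs against a captured sample of that output (an assumption matching the addressing in this guide); on the real node, replace the sample variable with routes=$(ip route show):

```shell
# Sample `ip route show` output (assumption: the addressing used in this guide);
# on the real node use: routes=$(ip route show)
routes='10.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.3
10.20.20.0/24 dev br-ex proto kernel scope link src 10.20.20.3'

# Flag any 10.10.10.x / 10.20.20.x route whose device is still eth0.
echo "$routes" | awk '$3 == "eth0" && $1 ~ /^10\.(10\.10|20\.20)\./ {
    print "fix needed:", $1
}'
```

Any route it flags should be deleted from eth0 and re-added on br-ex with the ip route commands shown above.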

 

  • Check the bridge configuration:
root@network:~# ovs-vsctl list-br

br-ex

br-int



root@network:~# ovs-vsctl show

    Bridge br-int

        Port br-int

            Interface br-int

                type: internal

    Bridge br-ex

        Port "eth0"

            Interface "eth0"

        Port br-ex

            Interface br-ex

                type: internal

    ovs_version: "1.4.0+build0"

 

4. Neutron-*

  • Install the Neutron components:
apt-get install neutron-plugin-openvswitch-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent
  • Edit /etc/neutron/api-paste.ini:
 
[filter:authtoken]

paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

auth_host = 10.10.10.2

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = neutron

admin_password = admin
 
  • Edit the OVS plugin configuration file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
 
[OVS]

tenant_network_type = gre

enable_tunneling = True

tunnel_id_ranges = 1:1000

integration_bridge = br-int

tunnel_bridge = br-tun

local_ip = 10.20.20.3



#Firewall driver for realizing neutron security group function

[SECURITYGROUP]

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
 
  • Update /etc/neutron/metadata_agent.ini (metadata_proxy_shared_secret must match the value configured for Nova on the controller node):
 
auth_url = http://10.10.10.2:35357/v2.0

auth_region = RegionOne

admin_tenant_name = service

admin_user = neutron

admin_password = admin



# IP address used by Nova metadata server

nova_metadata_ip = 10.10.10.2

    

# TCP Port used by Nova metadata server

nova_metadata_port = 8775



metadata_proxy_shared_secret = helloOpenStack
 
  • Edit /etc/neutron/neutron.conf:
 
rabbit_host = 10.10.10.2

    

[keystone_authtoken]

auth_host = 10.10.10.2

auth_port = 35357

auth_protocol = http

admin_tenant_name = service

admin_user = neutron

admin_password = admin

signing_dir = /var/lib/quantum/keystone-signing



[database]

connection = mysql://neutronUser:[email protected]/neutron
 
  • Edit /etc/neutron/l3_agent.ini:
 
[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

signing_dir = /var/cache/neutron

admin_tenant_name = service

admin_user = neutron

admin_password = admin

auth_url = http://10.10.10.2:35357/v2.0

l3_agent_manager = neutron.agent.l3_agent.L3NATAgentWithStateReport

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
 
  • Edit /etc/neutron/dhcp_agent.ini:
[DEFAULT]

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

signing_dir = /var/cache/neutron

admin_tenant_name = service

admin_user = neutron

admin_password = admin

auth_url = http://10.10.10.2:35357/v2.0

dhcp_agent_manager = neutron.agent.dhcp_agent.DhcpAgentWithStateReport

root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf

state_path = /var/lib/neutron
  • Restart the services:
cd /etc/init.d/; for i in $( ls neutron-* ); do service $i restart; done
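As a slightly more defensive variant of the loop above, you can restart the four agents by explicit name instead of globbing /etc/init.d. The service command is stubbed with a shell function here purely so the sketch runs standalone; drop the stub on the real node:

```shell
# Stub for illustration only: remove this on the real node so the actual
# `service` command is invoked.
service() { echo "restarting $1"; }

# Restart each Neutron agent installed earlier, by explicit name.
for svc in neutron-plugin-openvswitch-agent neutron-dhcp-agent \
           neutron-l3-agent neutron-metadata-agent; do
    service "$svc" restart
done
```

Restarting by explicit name avoids accidentally picking up unrelated neutron-* files that may appear in /etc/init.d later.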

 

        The network node services are now fully deployed; next comes the installation of the compute node.

 

          

 
