OpenStack Newton: replacing Linux bridge with OVS and configuring DVR


I have spent the last few days working on DVR and finally got it running. The material available online is scattered, I went through a lot of trial and error, the steps are tedious and easy to get wrong, and there are plenty of pitfalls, so I am writing the whole process up here.

A few releases ago the official installation guide switched its networking instructions from OVS to Linux bridge; there are various reasons for this that I won't go into here. Since I deployed by following the official guide, I naturally ended up with Linux bridge. DVR (Distributed Virtual Router) forwards east-west traffic directly on the compute nodes instead of detouring through the controller node (since Mitaka the install guide folds the network node into the controller node), which reduces the load on the controller. Some sources state that DVR can only run on top of OVS, and every DVR deployment guide I found online is based on OVS, so I had to replace Linux bridge with OVS.

First, my OpenStack environment: one controller node and two compute nodes. The controller's first NIC has the IP address 192.168.1.51 and serves the management network; its second NIC has no IP address and is used for the provider network, i.e. external connectivity. Compute node 1's first NIC is 192.168.1.71 on the management network, and compute node 2's first NIC is 192.168.1.72 on the management network. I use the self-service network option. The OS is Ubuntu 16.04, and all installation steps follow the official Newton guide.

If you already have the Linux bridge agent installed, remove it first:

# apt purge neutron-linuxbridge-agent
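
Purging the package does not necessarily clean up the brq-* Linux bridges the agent created. Before moving on you may want to check for leftovers and remove any you find; a minimal check (bridge names will differ on your system, the name below is a placeholder):

# ip -br link show type bridge          # any leftover brq-* bridges from the Linux bridge agent show up here
# ip link delete <bridge-name>          # optional: remove a leftover bridge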

Then drop the neutron database in MySQL:

mysql> DROP DATABASE neutron;

I. Steps for installing OVS on Newton:

1. On the network node

First, log in to the database:

# mysql -u root -p

Then create the neutron database:

mysql> CREATE DATABASE neutron;

Grant privileges on the database:

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Replace NEUTRON_DBPASS with a password of your choice.

Note: for simplicity, every password in this guide is 123456.
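
You can optionally verify the grants by logging in as the neutron user (enter the password you chose when prompted):

# mysql -u neutron -p -e "SHOW DATABASES;"    # the neutron database should appear in the output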

Source the admin credentials:

# . admin-openrc

Create the service credentials:

# openstack user create --domain default --password-prompt neutron
# openstack role add --project service --user neutron admin
# openstack service create --name neutron --description "OpenStack Networking" network
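
Optionally, confirm that the user and the service record were created:

# openstack user show neutron
# openstack service list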

Create the Networking service API endpoints:

# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
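
A quick optional check that all three endpoints exist:

# openstack endpoint list --service network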

Install the components:

# apt install neutron-server neutron-plugin-ml2 neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent neutron-openvswitch-agent

Configure /etc/neutron/neutron.conf as follows:

[DEFAULT]
rpc_backend = rabbit
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[database]
connection = mysql+pymysql://neutron:123456@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_messaging_rabbit]
rabbit_host = 192.168.1.51
rabbit_userid = openstack
rabbit_password = 123456
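
If you would rather script these edits than modify the file by hand, the crudini utility (assuming it is available in your repositories, as it is on Ubuntu 16.04) can set individual keys, for example:

# apt install crudini
# crudini --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
# crudini --set /etc/neutron/neutron.conf DEFAULT service_plugins router
# crudini --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:123456@controller/neutron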

Configure /etc/neutron/plugins/ml2/ml2_conf.ini as follows:

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True

Configure /etc/nova/nova.conf as follows:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
state_path=/var/lib/nova
force_dhcp_release=True
verbose=True
ec2_private_dns_show_ip=True
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.1.51
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[database]
connection = mysql+pymysql://nova:123456@controller/nova
[api_database]
connection = mysql+pymysql://nova:123456@controller/nova_api
[oslo_concurrency]
lock_path=/var/lock/nova
[libvirt]
use_virtio_for_bridges=True
[wsgi]
api_paste_config=/etc/nova/api-paste.ini
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = True
metadata_proxy_shared_secret = 123456

Configure /etc/neutron/plugins/ml2/openvswitch_agent.ini as follows:

[agent]
tunnel_types = vxlan
l2_population = True
prevent_arp_spoofing = True
[ovs]
local_ip = 192.168.1.51
bridge_mappings = provider:br-ex
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Configure /etc/neutron/l3_agent.ini as follows:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =

Configure /etc/neutron/dhcp_agent.ini as follows:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True

Configure /etc/neutron/metadata_agent.ini as follows:

[DEFAULT]
nova_metadata_ip = 192.168.1.51
metadata_proxy_shared_secret = 123456

Populate the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
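
If the migration succeeded, the neutron database now contains tables; an optional sanity check:

# mysql -u neutron -p -e "USE neutron; SHOW TABLES;" | head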

Restart the services:

# service nova-api restart
# service neutron-server restart
# service openvswitch-switch restart
# service neutron-openvswitch-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
# service neutron-l3-agent restart

Create the external bridge:

# ovs-vsctl add-br br-ex

Attach the NIC that faces the external network to the bridge; in my setup this is the second NIC, eno2:

# ovs-vsctl add-port br-ex eno2
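
You can confirm that the bridge exists and that the port was attached:

# ovs-vsctl show    # br-ex should be listed with eno2 as one of its ports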

Restart the services again:

# service nova-api restart
# service neutron-server restart
# service openvswitch-switch restart
# service neutron-openvswitch-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
# service neutron-l3-agent restart

2. On the compute nodes (using compute node 1 as the example)

Add the following to /etc/sysctl.conf:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Then apply the settings:

# sysctl -p
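
If sysctl -p complains that the net.bridge.* keys are unknown, the br_netfilter kernel module is probably not loaded yet (this depends on your kernel setup); loading it first should make the keys available:

# modprobe br_netfilter
# sysctl -p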

Install the components:

# apt-get install -y neutron-openvswitch-agent

Configure /etc/neutron/neutron.conf as follows:

[DEFAULT]
rpc_backend = rabbit
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = 123456
[oslo_messaging_rabbit]
rabbit_host = 192.168.1.51
rabbit_userid = openstack
rabbit_password = 123456

Configure /etc/neutron/plugins/ml2/openvswitch_agent.ini as follows:

[agent]
tunnel_types = vxlan
l2_population = True
prevent_arp_spoofing = True
[ovs]
local_ip = 192.168.1.71
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Configure /etc/nova/nova.conf as follows:

[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

After completing the configuration above, restart the compute service:

# service nova-compute restart

Then restart OVS and its agent:

# service openvswitch-switch restart
# service neutron-openvswitch-agent restart
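
Once the agent is running it creates the integration and tunnel bridges by itself; you can verify with:

# ovs-vsctl show    # expect br-int and br-tun to be present on the compute node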

That completes the OVS configuration. On the controller node, check whether the networking agents are healthy:

# openstack network agent list

If everything is configured correctly, neutron-metadata-agent, neutron-openvswitch-agent, neutron-l3-agent and neutron-dhcp-agent on the controller node, and neutron-openvswitch-agent on the compute nodes, should all show as up.

II. Configuring DVR

With OVS configured, DVR can now be set up.

1. On the controller node

Add the following to the matching sections of /etc/neutron/neutron.conf:

[DEFAULT]
l3_ha = True
router_distributed = True

Add the following to the matching sections of /etc/neutron/plugins/ml2/ml2_conf.ini:

[agent]
l2_population = True
enable_distributed_routing = True
arp_responder = True

Add the following to the matching section of /etc/neutron/l3_agent.ini:

[DEFAULT]
ha_vrrp_auth_password = password
agent_mode = dvr_snat

Restart the services after making these changes:

# service nova-api restart
# service openvswitch-switch restart
# service neutron-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
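
With router_distributed = True, routers created by project users are distributed by default. As admin you can also request (or verify) the mode explicitly; a sketch using the neutron CLI, with demo-router as a placeholder name:

# neutron router-create --distributed True demo-router
# neutron router-show demo-router    # the "distributed" field should show True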

2. On the compute nodes (using compute node 1 as the example)

Edit /etc/sysctl.conf so that it contains:

net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

Then load the settings:

# sysctl -p

Install the required packages on the compute node:

# apt-get install neutron-l3-agent neutron-metadata-agent neutron-plugin-ml2

Add the following to the matching sections of /etc/neutron/plugins/ml2/ml2_conf.ini. Note that the physical network name in bridge_mappings has to match the one defined on the controller (provider in this setup), and the br-ex bridge must also exist on the compute node (create it with ovs-vsctl add-br br-ex if it does not), otherwise the OVS agent refuses to start:

[ml2]
mechanism_drivers = openvswitch,l2population
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.1.71
bridge_mappings = provider:br-ex
[agent]
l2_population = True
tunnel_types = vxlan
enable_distributed_routing = True
arp_responder = True

Add the following to the matching section of /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
agent_mode = dvr

Add the following to the matching section of /etc/neutron/metadata_agent.ini:

[DEFAULT]
auth_uri = http://192.168.1.51:5000
auth_url = http://192.168.1.51:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = 123456
nova_metadata_ip = 192.168.1.51
metadata_proxy_shared_secret = 123456

Restart the services:

# service nova-compute restart
# service openvswitch-switch restart
# service neutron-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-metadata-agent restart

That completes the DVR setup. To test it, launch two instances on two different networks on the same compute node and connect the networks with a distributed router. While the two instances talk to each other, run tcpdump on the distributed router's ports and on the compute node's physical NIC: you will see traffic on the router's ports but nothing leaving the compute node's NIC, as sketched below.
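
A sketch of that check; the router ID, qr- port and NIC names below are placeholders, so look up the real ones with ip netns and ip addr on your compute node:

# ip netns                                                               # a qrouter-<router-id> namespace appears once the router is scheduled here
# ip netns exec qrouter-<router-id> ip addr                              # find the qr-xxxxxxxx-xx ports holding the subnet gateways
# ip netns exec qrouter-<router-id> tcpdump -n -i qr-xxxxxxxx-xx icmp    # pings between the two instances show up here
# tcpdump -n -i eno1 'udp port 4789'                                     # the VXLAN traffic on the physical NIC stays quiet for this east-west path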


OpenStack covers an enormous amount of ground and I have barely scratched the surface; there is still a great deal to learn, and I hope to keep improving together with everyone. There may well be shortcomings in this article, and corrections are welcome.



References

https://docs.openstack.org/newton/install-guide-ubuntu/neutron-controller-install.html

https://docs.openstack.org/newton/install-guide-ubuntu/neutron-compute-install.html

https://kairen.gitbooks.io/openstack-ubuntu-newton/content/ubuntu-binary/neutron/
