Author: Zhang Hua    Published: 2016-07-05
Copyright: This article may be freely reproduced; when reproducing it, please credit the original source and author with a hyperlink and include this copyright notice
( http://blog.csdn.net/quqi99 )
Network Topology
Quickly Setting Up an OpenStack IPv6 Environment
juju destroy-environment --force zhhuabj
juju-deployer -c bundles/ipv6/next-ipv6.yaml -d xenial-mitaka
./configure  # but do not use it to configure the network; the network is configured manually below
sudo add-apt-repository cloud-archive:mitaka
sudo apt-get update
sudo apt-get install --only-upgrade python-neutronclient
Configure the Network & Create an Instance
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://[2001:db8:0:1:f816:3eff:fecd:1206]:5000/v2.0
neutron net-create --provider:network_type flat --provider:physical_network physnet1 --router:external=True ext_net
neutron net-create private
neutron address-scope-create --shared address-scope-ip4 4
neutron address-scope-create --shared address-scope-ip6 6
neutron subnetpool-create --address-scope address-scope-ip4 --shared --pool-prefix 10.5.150.0/24 --default-prefixlen 24 public-pool-ip4
neutron subnetpool-create --address-scope address-scope-ip4 --shared --pool-prefix 192.168.21.0/24 --default-prefixlen 24 --is-default true default-pool-ip4
neutron subnet-create --gateway 10.5.0.1 --enable_dhcp=False --name public-subnet-ip4 --subnetpool public-pool-ip4 ext_net
neutron subnet-create --name private-subnet-ip4 --subnetpool default-pool-ip4 private
neutron subnetpool-create --address-scope address-scope-ip6 --shared --pool-prefix 2001:db8:2::/64 --default-prefixlen 64 --max-prefixlen 64 --is-default true default-pool-ip6
neutron subnetpool-create --address-scope address-scope-ip6 --pool-prefix 2001:db8:3::/64 --default-prefixlen 64 public-pool-ipv6
neutron subnet-create --name public-subnet-ip6 --ip_version 6 --subnetpool public-pool-ipv6 ext_net
neutron subnet-create --name private-subnet-ip6 --ip_version 6 --use-default-subnetpool --ipv6-address-mode slaac --ipv6-ra-mode slaac private
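Before wiring up the router, the scope and pool associations created above can be spot-checked. A sketch against the Mitaka-era neutron CLI (the column names passed to -c are assumptions; adjust to your client version):

```shell
# Address scopes and the subnet pools bound to them;
# default-pool-ip4 / default-pool-ip6 should report is_default=True.
neutron address-scope-list
neutron subnetpool-list -c name -c prefixes -c address_scope_id -c is_default

# Both private subnets should appear with the expected CIDR and ip_version.
neutron subnet-list -c name -c cidr -c ip_version
```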
neutron router-create provider-router
neutron router-interface-add provider-router private-subnet-ip4
neutron router-interface-add provider-router private-subnet-ip6
neutron router-gateway-set provider-router ext_net
./tools/sec_groups.sh
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
NET_ID=$(neutron net-list |grep 'private' |awk '{print $2}')
nova boot --poll --key-name mykey --image trusty --flavor 2 --nic net-id=$NET_ID i1
Installing Neutron BGP
1. On the neutron-api/0 node:
sudo vi /etc/neutron/neutron.conf
service_plugins = bgp,router,firewall,lbaas,vpnaas,metering
sudo service neutron-server restart
2. On the neutron-gateway/0 node:
sudo apt install neutron-bgp-dragent python-ryu
sudo vi /etc/neutron/bgp_dragent.ini
[BGP]
bgp_speaker_driver = neutron.services.bgp.driver.ryu.driver.RyuBgpDriver
bgp_router_id = 10.5.3.191
sudo service neutron-bgp-dragent restart
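After the restart, the dragent should register itself with neutron-server; a quick sanity check from a client node (a sketch; the exact agent-type string and log path may differ in your deployment):

```shell
# The BGP dynamic routing agent should be listed with alive = :-)
neutron agent-list | grep -i bgp

# If speaker scheduling fails later, this log is the first place to look.
sudo tail -n 20 /var/log/neutron/neutron-bgp-dragent.log
```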
Configuring Neutron BGP
neutron bgp-speaker-create --ip-version 6 --local-as 65412 --advertise-floating-ip-host-routes false bgp1
neutron bgp-speaker-network-add bgp1 ext_net
neutron bgp-speaker-advertiseroute-list bgp1
neutron bgp-peer-create --peer-ip 2001:db8:3::1 --remote-as 6301 bgppeer
neutron bgp-speaker-peer-add bgp1 bgppeer
AGENT_ID=$(neutron agent-list |grep bgp |awk '{print $2}')
neutron bgp-dragent-speaker-add $AGENT_ID bgp1
ubuntu@zhhuabj-bastion:~/openstack-charm-testing$ neutron bgp-speaker-advertiseroute-list bgp1
+-----------------+---------------+
| destination | next_hop |
+-----------------+---------------+
| 2001:db8:2::/64 | 2001:db8:3::3 |
+-----------------+---------------+
Quagga configuration used for testing (/etc/quagga/bgpd.conf)
ubuntu@zhhuabj-bastion:~$ sudo cat /etc/quagga/bgpd.conf
hostname zhhuabj-bastion
password zebra
log file /var/log/quagga/bgpd.log
log stdout
!
router bgp 6301
no synchronization
bgp router-id 10.230.56.15
!network 192.168.1.0/24
!neighbor 10.230.56.21 remote-as 65412
!neighbor 10.230.56.21 description test-v4
neighbor 2001:db8:3::2 remote-as 65412
neighbor 2001:db8:3::2 description test-v6
no auto-summary
no neighbor 2001:db8:3::2 activate
!
address-family ipv6
network 2001:db8:1::/48
network 2001:db8:1::/56
network 2001:db8:1::/64
neighbor 2001:db8:3::2 activate
neighbor 2001:db8:3::2 route-map IPV6-OUT out
exit-address-family
!
ipv6 prefix-list pl-ipv6 seq 10 permit 2001:db8:1::/56 le 64
route-map IPV6-OUT permit 10
match ipv6 address prefix-list pl-ipv6
set ipv6 next-hop global 2001:db8:3::1
!
line vty
!
debug bgp events
debug bgp filters
!debug bgp fsm
!debug bgp keepalives
debug bgp updates
Verification
After starting quagga (sudo service quagga restart), the logs look as follows:
1. On the quagga side:
2016/07/07 04:28:41 BGP: [Event] BGP connection from host 2001:db8:3::2
2016/07/07 04:28:41 BGP: [Event] Make dummy peer structure until read Open packet
2016/07/07 04:28:41 BGP: 2001:db8:3::2 [Event] Transfer accept BGP peer to real (state Active)
2016/07/07 04:28:41 BGP: 2001:db8:3::2 [Event] Accepting BGP peer delete
2016/07/07 04:28:41 BGP: 2001:db8:3::2 rcvd UPDATE w/ attr: , origin i, mp_nexthop 2001:db8:3::3, path 65412
2016/07/07 04:28:41 BGP: 2001:db8:3::2 rcvd 2001:db8:2::/64
2016/07/07 04:28:42 BGP: Import timer expired.
2016/07/07 04:28:42 BGP: 2001:db8:3::2 send UPDATE 2001:db8:1::/56
2016/07/07 04:28:42 BGP: 2001:db8:3::2 send UPDATE 2001:db8:1::/64
2. On the neutron side:
2016-07-07 04:30:12.968 19320 DEBUG bgpspeaker.speaker [-] Received msg from ('2001:db8:3::1', '179') << BGPKeepAlive(len=19,type=4) _handle_msg /usr/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py:432
2016-07-07 04:30:12.996 19320 DEBUG bgpspeaker.speaker [-] Sent msg to ('2001:db8:3::1', '179') >> BGPKeepAlive(len=19,type=4) send /usr/lib/python2.7/dist-packages/ryu/services/protocols/bgp/speaker.py:400
3. On the quagga side:
ubuntu@zhhuabj-bastion:~$ sudo vtysh
Hello, this is Quagga (version 0.99.22.4).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
zhhuabj-bastion# show bgp summary
BGP router identifier 10.5.0.3, local AS number 6301
RIB entries 5, using 560 bytes of memory
Peers 2, using 9120 bytes of memory
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
2001:db8:3::2 4 65412 15 19 0 0 0 00:03:10 1
Total number of neighbors 1
4. The route has appeared on the quagga side
ubuntu@zhhuabj-bastion:~$ route -n -6 |grep 2001:db8:2
2001:db8:2::/64 2001:db8:3::3 UG 1024 0 0 eth0
5. The route has appeared on the neutron-gateway/0 side
ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-f9d537aa-1deb-4620-92ac-4f9feba7969f route -n -6 |grep 2001:db8:2
2001:db8:2::/64 :: U 256 1 7 qr-50f401dd-e0
2001:db8:2::/128 :: Un 0 1 0 lo
2001:db8:2::1/128 :: Un 0 2 4 lo
6. The instance can be pinged directly
ubuntu@zhhuabj-bastion:~$ ping6 -c 1 2001:db8:2:0:f816:3eff:fec8:cae9
PING 2001:db8:2:0:f816:3eff:fec8:cae9(2001:db8:2:0:f816:3eff:fec8:cae9) 56 data bytes
64 bytes from 2001:db8:2:0:f816:3eff:fec8:cae9: icmp_seq=1 ttl=63 time=6.75 ms
ubuntu@zhhuabj-bastion:~$ nova list
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------+
| a095e8c6-e96b-4abb-9354-6a4e7d82dbac | i1 | ACTIVE | - | Running | private=192.168.21.3, 2001:db8:2:0:f816:3eff:fec8:cae9, 10.5.150.3 |
+--------------------------------------+------+--------+------------+-------------+--------------------------------------------------------------------+
7. iptables rules on the network node
IP is connectionless, so IP routing is per-packet: every packet gets its route through a routing-table lookup, which is the protocol stack's default behavior. But if the stack can identify flows, can it route per flow instead? Certainly. In Linux, nf_conntrack enables flow-based IP routing. The idea is that only the first forward packet and the first reverse packet of a flow go through a standard routing-table lookup; the result is cached in the conntrack entry (a lookup is still needed, but a conntrack lookup replaces the route lookup), and subsequent packets of the same flow simply reuse the cached route.
In the implementation, the skb's route is attached to the conntrack entry wherever the packet leaves the stack. This works because both POSTROUTING and INPUT come after the routing decision: if a per-packet route lookup happened earlier, the skb already carries a dst_entry, which can simply be bound to the conntrack entry. Conversely, where a packet first enters the stack, the route is fetched from the conntrack entry and set on the skb directly. In iptables, both CONNMARK and MARK set marks: the former marks a connection, the latter a single packet. Since routing operates on single packets, the ip command only honors MARK, not CONNMARK. So there are two ways to mark every packet that an ip rule should match: either mark each packet directly with MARK, or use CONNMARK --restore-mark to copy the mark stored on the connection onto each packet. For example, to force all outgoing packets with destination port 443 onto a single path:
# Mark the first outgoing packet of the connection (the TCP SYN)
iptables -t mangle -A PREROUTING -p tcp --dport 443 -m conntrack --ctstate NEW -j MARK --set-mark 0x01
# At routing-decision time, the route associated with mark 0x01 is chosen. Then the packet's mark (0x01) is saved onto the corresponding connection; that is what --save-mark does.
iptables -t mangle -A POSTROUTING -m conntrack --ctstate NEW -j CONNMARK --save-mark
# For each subsequent packet of the connection, --restore-mark copies the mark saved on the connection (by the rule above) back onto the individual packet.
iptables -t mangle -A PREROUTING -i br-lan -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark
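The marks only influence routing if an ip rule actually maps them to a routing table; a minimal sketch to complete the picture (table 100, the next-hop 192.0.2.1, and eth1 are hypothetical):

```shell
# Packets carrying fwmark 0x01 consult table 100 instead of main.
sudo ip rule add fwmark 0x01 table 100
# Table 100 forces such traffic out via the dedicated next-hop.
sudo ip route add default via 192.0.2.1 dev eth1 table 100
sudo ip route flush cache
```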
ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-f9d537aa-1deb-4620-92ac-4f9feba7969f iptables-save
# Generated by iptables-save v1.6.0 on Thu Jul 7 04:48:21 2016
*raw
:PREROUTING ACCEPT [202:17151]
:OUTPUT ACCEPT [127:16465]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
COMMIT
# Completed on Thu Jul 7 04:48:21 2016
# Generated by iptables-save v1.6.0 on Thu Jul 7 04:48:21 2016
*filter
:INPUT ACCEPT [6:1480]
:FORWARD ACCEPT [26:2184]
:OUTPUT ACCEPT [127:16465]
:neutron-filter-top - [0:0]
:neutron-l3-agent-FORWARD - [0:0]
:neutron-l3-agent-INPUT - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-local - [0:0]
:neutron-l3-agent-scope - [0:0]
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-filter-top
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-filter-top
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-filter-top -j neutron-l3-agent-local
-A neutron-l3-agent-FORWARD -j neutron-l3-agent-scope
-A neutron-l3-agent-INPUT -m mark --mark 0x1/0xffff -j ACCEPT
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-scope -o qr-924618a4-4c -m mark ! --mark 0x4010000/0xffff0000 -j DROP
COMMIT
# Completed on Thu Jul 7 04:48:21 2016
# Generated by iptables-save v1.6.0 on Thu Jul 7 04:48:21 2016
*mangle
:PREROUTING ACCEPT [202:17151]
:INPUT ACCEPT [176:14967]
:FORWARD ACCEPT [26:2184]
:OUTPUT ACCEPT [127:16465]
:POSTROUTING ACCEPT [153:18649]
:neutron-l3-agent-FORWARD - [0:0]
:neutron-l3-agent-INPUT - [0:0]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-floatingip - [0:0]
:neutron-l3-agent-mark - [0:0]
:neutron-l3-agent-scope - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A INPUT -j neutron-l3-agent-INPUT
-A FORWARD -j neutron-l3-agent-FORWARD
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-POSTROUTING -o qg-1842f86d-a3 -m connmark --mark 0x0/0xffff0000 -j CONNMARK --save-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-mark
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-scope
-A neutron-l3-agent-PREROUTING -m connmark ! --mark 0x0/0xffff0000 -j CONNMARK --restore-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-floatingip
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x1/0xffff
-A neutron-l3-agent-float-snat -m connmark --mark 0x0/0xffff0000 -j CONNMARK --save-mark --nfmask 0xffff0000 --ctmask 0xffff0000
-A neutron-l3-agent-mark -i qg-1842f86d-a3 -j MARK --set-xmark 0x2/0xffff
-A neutron-l3-agent-scope -i qg-1842f86d-a3 -j MARK --set-xmark 0x4010000/0xffff0000
-A neutron-l3-agent-scope -i qr-924618a4-4c -j MARK --set-xmark 0x4010000/0xffff0000
COMMIT
# Completed on Thu Jul 7 04:48:21 2016
# Generated by iptables-save v1.6.0 on Thu Jul 7 04:48:21 2016
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [1:84]
:POSTROUTING ACCEPT [3:252]
:neutron-l3-agent-OUTPUT - [0:0]
:neutron-l3-agent-POSTROUTING - [0:0]
:neutron-l3-agent-PREROUTING - [0:0]
:neutron-l3-agent-float-snat - [0:0]
:neutron-l3-agent-snat - [0:0]
:neutron-postrouting-bottom - [0:0]
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 10.5.150.3/32 -j DNAT --to-destination 192.168.21.3
-A neutron-l3-agent-POSTROUTING ! -i qg-1842f86d-a3 ! -o qg-1842f86d-a3 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 10.5.150.3/32 -j DNAT --to-destination 192.168.21.3
-A neutron-l3-agent-float-snat -s 192.168.21.3/32 -j SNAT --to-source 10.5.150.3
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-1842f86d-a3 -m connmark --mark 0x4010000/0xffff0000 -j ACCEPT
-A neutron-l3-agent-snat -o qg-1842f86d-a3 -j SNAT --to-source 10.5.150.2
-A neutron-l3-agent-snat -m mark ! --mark 0x2/0xffff -m conntrack --ctstate DNAT -j SNAT --to-source 10.5.150.2
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
COMMIT
# Completed on Thu Jul 7 04:48:21 2016
ubuntu@juju-zhhuabj-machine-12:~$ sudo ip netns exec qrouter-f9d537aa-1deb-4620-92ac-4f9feba7969f ip addr show
2: qr-924618a4-4c@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc noqueue state UP group default qlen 1000
link/ether fa:16:3e:9a:2e:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.21.1/24 brd 192.168.21.255 scope global qr-924618a4-4c
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe9a:2ec2/64 scope link
valid_lft forever preferred_lft forever
3: qr-50f401dd-e0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc noqueue state UP group default qlen 1000
link/ether fa:16:3e:41:1e:7e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 2001:db8:2::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe41:1e7e/64 scope link
valid_lft forever preferred_lft forever
4: qg-1842f86d-a3@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fa:16:3e:39:ac:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.5.150.2/24 brd 10.5.150.255 scope global qg-1842f86d-a3
valid_lft forever preferred_lft forever
inet 10.5.150.3/32 brd 10.5.150.3 scope global qg-1842f86d-a3
valid_lft forever preferred_lft forever
inet6 2001:db8:3::3/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe39:ac3a/64 scope link
valid_lft forever preferred_lft forever