As mentioned earlier, in virtualization the network devices that attach to a virtual bridge are called TAP devices. In a physical environment this would be the network cable connecting the virtual machine to the bridge and carrying Ethernet frames. TAP devices are part of the kernel's TUN/TAP driver.
Before moving on to the other networking topics, here is a quick walk-through of creating a bridge and adding a TAP device to it.
1. Check whether the bridge module is loaded:
[root@localkvm-1 ~]# lsmod | grep bridge
bridge 107106 1 ebtable_broute
stp 12976 1 bridge
llc 14552 2 stp,bridge
2. Add a bridge:
[root@localkvm-1 ~]# brctl addbr tester
3. Show the bridges already present on the system:
[root@localkvm-1 ~]# brctl show
bridge name bridge id STP enabled interfaces
tester 8000.000000000000 no
virbr0 8000.000c29def17a yes ens38
virbr0-nic
vnet0
4. Check whether the TUN/TAP module is loaded:
[root@localkvm-1 ~]# lsmod | grep tun
tun 27226 4 vhost_net
5. Create a tap device:
[root@localkvm-1 ~]# ip tuntap add dev vm-vnic mode tap
[root@localkvm-1 ~]# ip link show vm-vnic
17: vm-vnic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
link/ether aa:33:de:d8:bc:80 brd ff:ff:ff:ff:ff:ff
6. Add the vm-vnic tap device to our bridge:
[root@localkvm-1 ~]# brctl addif tester vm-vnic
[root@localkvm-1 ~]# brctl show
bridge name bridge id STP enabled interfaces
tester 8000.aa33ded8bc80 no vm-vnic
virbr0 8000.000c29def17a yes ens38
virbr0-nic
vnet0
7. Remove the vm-vnic tap device from the tester bridge:
[root@localkvm-1 ~]# brctl delif tester vm-vnic
[root@localkvm-1 ~]# brctl show tester
bridge name bridge id STP enabled interfaces
tester 8000.000000000000 no
8. Delete the vm-vnic tap device:
[root@localkvm-1 ~]# ip tuntap del dev vm-vnic mode tap
9. Delete the tester bridge:
$ sudo brctl delbr tester; echo $?
0
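On hosts where the brctl tool from bridge-utils is not available, the same steps can be done with iproute2 alone. A minimal sketch, assuming the same tester and vm-vnic names as above:
ip link add name tester type bridge      # step 2: create the bridge
ip link show type bridge                 # step 3: list bridges
ip tuntap add dev vm-vnic mode tap       # step 5: create the tap device
ip link set vm-vnic master tester        # step 6: attach the tap to the bridge
ip link set vm-vnic nomaster             # step 7: detach it from the bridge
ip tuntap del dev vm-vnic mode tap       # step 8: delete the tap device
ip link del tester                       # step 9: delete the bridge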
II. Types of virtual networks available:
- Isolated virtual network
- Routed virtual network
- NATed virtual network
- Bridged network using a physical device, a VLAN interface, a bond interface, or a bonded VLAN interface
- MacVTap
- PCI passthrough / NPIV
- OVS (Open vSwitch)
1. Isolated networks
There are two ways to create an isolated network:
1. Using virt-manager or virt-viewer (steps omitted)
2. Using the virsh command
(1) Create a file as shown below:
[root@localkvm-1 ~]# cat isolated.xml
<network>
  <name>isolated</name>
</network>
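The file above is the bare minimum. An isolated network can still carry its own subnet and DHCP so that guests on it (and the host) can reach each other; a sketch, with an illustrative subnet:
<network>
  <name>isolated</name>
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.2' end='192.168.200.254'/>
    </dhcp>
  </ip>
</network>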
(2) Define the network from the file above and check that it is registered:
[root@localkvm-1 ~]# virsh net-define isolated.xml
Network isolated defined from isolated.xml
[root@localkvm-1 ~]# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
isolated inactive no yes
(3) Get the details of the isolated network we defined:
[root@localkvm-1 ~]# virsh net-dumpxml isolated
<network>
  <name>isolated</name>
  <uuid>e595a02d-ab27-482f-9a64-daed799e9341</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:52:d0:b2'/>
</network>
Note: once the isolated network has been defined with net-define, its configuration file is stored under /etc/libvirt/qemu/networks.
[root@localkvm-1 ~]# cat /etc/libvirt/qemu/networks/isolated.xml
<network>
  <name>isolated</name>
  <uuid>e595a02d-ab27-482f-9a64-daed799e9341</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:52:d0:b2'/>
</network>
(4) Now start the isolated network we defined and verify that it is active:
[root@localkvm-1 ~]# virsh net-start isolated
Network isolated started
[root@localkvm-1 ~]# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
isolated active no yes
Verification:
We attach a new virtual NIC, backed by the new isolated network, to an existing virtual machine.
(1) Check the existing virtual machine's network interfaces:
[root@localkvm-1 ~]# virsh domiflist centos72
Interface Type Source Model MAC
-------------------------------------------------------
vnet0 bridge virbr0 virtio 52:54:00:58:e9:0f
(2) Now attach a new interface to centos72 and list the interfaces again:
[root@localkvm-1 ~]# virsh attach-interface --domain centos72 --source isolated --type network --model virtio --config --live
Interface attached successfully
[root@localkvm-1 ~]# virsh domiflist centos72
Interface Type Source Model MAC
-------------------------------------------------------
vnet0 bridge virbr0 virtio 52:54:00:58:e9:0f
vnet1 network isolated virtio 52:54:00:1f:4f:e2
Two options are worth noting here:
- --config: makes the change persistent, so it is still in place the next time the VM starts.
- --live: tells libvirt that the interface is being attached to a running VM. If the VM is shut off, omit the --live option (see the example right after this list).
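For example, if centos72 were shut off, the same interface would be attached persistently with only --config, using the same names as above:
virsh attach-interface --domain centos72 --source isolated --type network --model virtio --config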
Let's check which bridge the attached interface is connected to, using virsh net-dumpxml isolated:
[root@localkvm-1 ~]# virsh net-dumpxml isolated
<network>
  <name>isolated</name>
  <uuid>e595a02d-ab27-482f-9a64-daed799e9341</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:52:d0:b2'/>
</network>
As the output shows, the network is backed by the virbr1 bridge.
[root@localkvm-1 ~]# brctl show virbr1
bridge name bridge id STP enabled interfaces
virbr1 8000.52540052d0b2 yes virbr1-nic
(3) Now detach the NIC we just added from the virtual machine:
[root@localkvm-1 ~]# virsh detach-interface --domain centos72 --type network --mac 52:54:00:1f:4f:e2 --config --live
Interface detached successfully
III. Routed virtual networks
There are two ways to create a routed network:
1. Using virt-manager or virt-viewer (steps omitted)
2. Using virsh to create a routed network
(1) Create the routed.xml file as follows:
[root@localkvm-1 ~]# cat routed.xml
<network>
  <name>routed</name>
  <forward mode='route'/>
</network>
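A routed definition normally also names the outgoing host device and a subnet for the guests. A fuller sketch; the device ens38 and the subnet below are illustrative assumptions, not taken from the file above:
<network>
  <name>routed</name>
  <forward mode='route' dev='ens38'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.2' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>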
(2) Define the routed network:
[root@localkvm-1 ~]# virsh net-define routed.xml
Network routed defined from routed.xml
(3) Start the network and mark it for autostart:
[root@localkvm-1 ~]# virsh net-start routed
Network routed started
[root@localkvm-1 ~]# virsh net-autostart routed
Network routed marked as autostarted
(4) View its details:
[root@localkvm-1 ~]# virsh net-info routed
Name: routed
UUID: 6e679a74-e6ab-4885-a8ee-89cb7142fcc0
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr2
(5) Edit the virtual network. The network must be stopped before it can be edited:
[root@localkvm-1 ~]# virsh net-destroy routed
Network routed destroyed
[root@localkvm-1 ~]# virsh net-edit routed
<network>
  <name>routed</name>
  <uuid>6e679a74-e6ab-4885-a8ee-89cb7142fcc0</uuid>
  <forward mode='route'/>
  <bridge name='virbr2' stp='on' delay='0'/>
</network>
(6) After editing, it is strongly recommended to verify the result with net-dumpxml:
[root@localkvm-1 ~]# virsh net-dumpxml routed
<network>
  <name>routed</name>
  <uuid>6e679a74-e6ab-4885-a8ee-89cb7142fcc0</uuid>
  <forward mode='route'/>
  <bridge name='virbr2' stp='on' delay='0'/>
</network>
(7) Then start the network again:
[root@localkvm-1 ~]# virsh net-start routed
Network routed started
IV. NATed virtual networks
Prerequisites:
Make sure the iptables and iproute packages are installed, since libvirt depends on them.
There are two ways to configure a NAT network:
1. Using virt-manager or virt-viewer graphically (steps omitted)
2. Using virsh
(1) List the available networks:
root@kvm:~# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
(2) Show the configuration of the default network:
root@kvm:~# virsh net-dumpxml default
<network>
  <name>default</name>
  <uuid>2ab5d22c-5928-4304-920e-bc43b8731bcf</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
(3) Look at the XML definition file of the default network:
root@kvm:~# cat /etc/libvirt/qemu/networks/default.xml
<network>
  <name>default</name>
  <uuid>2ab5d22c-5928-4304-920e-bc43b8731bcf</uuid>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
(4) List all instances running on the host:
root@kvm:~# virsh list --all
Id Name State
----------------------------------------------------
3 kvm1 running
(5) Make sure the KVM instance's network is connected to the default bridge:
root@kvm:~# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.fe5400559bd6 yes vnet0
(6) Create a new NAT network definition:
root@kvm:~# cat nat_net.xml
<network>
  <name>nat_net</name>
  <forward mode='nat'/>
  <ip address='10.10.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.10.10.2' end='10.10.10.254'/>
    </dhcp>
  </ip>
</network>
(7) Define the new network:
root@kvm:~# virsh net-define nat_net.xml
Network nat_net defined from nat_net.xml
root@kvm:~# virsh net-list --all
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
nat_net inactive no yes
(8) Start the new network and enable autostart:
root@kvm:~# virsh net-start nat_net
Network nat_net started
root@kvm:~# virsh net-autostart nat_net
Network nat_net marked as autostarted
root@kvm:~# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
nat_net active yes yes
(9) Get the details of the new network:
root@kvm:~# virsh net-info nat_net
Name: nat_net
UUID: fba2ca2b-8ca7-4dbb-beee-14799ee04bc3
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr1
(10) Edit the XML definition of the kvm1 instance and change the name of its source network:
root@kvm:~# virsh edit kvm1
...
    <interface type='network'>
      <mac address='52:54:00:55:9b:d6'/>
      <source network='nat_net'/>
      ...
    </interface>
...
Domain kvm1 XML configuration edited.
(11) Restart the KVM virtual machine:
root@kvm:~# virsh destroy kvm1
Domain kvm1 destroyed
root@kvm:~# virsh start kvm1
Domain kvm1 started
(12) List the network bridges on the host:
root@kvm:~# brctl show
bridge name bridge id STP enabled interfaces
virbr0 8000.000000000000 yes
virbr1 8000.525400ba8e2c yes virbr1-nic
vnet0
(13) Connect to the KVM instance and verify that its network is configured correctly:
root@kvm:~# virsh console kvm1
Connected to domain kvm1
Escape character is ^]
Debian GNU/Linux 8 debian ttyS0
debian login: root
Password:
...
root@debian:~# ip a s eth0 | grep inet
inet 10.10.10.92/24 brd 10.10.10.255 scope global eth0
inet6 fe80::5054:ff:fe55:9bd6/64 scope link
root@debian:~# ifconfig eth0 up && dhclient eth0
root@debian:~# ping 10.10.10.1 -c 3
PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.313 ms
64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.136 ms
64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.253 ms
--- 10.10.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.136/0.234/0.313/0.073 ms
(14) On the KVM host, check whether the DHCP service is running:
root@kvm:~# pgrep -lfa dnsmasq
38983 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
40098 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/nat_net.conf
(15) Check the IP address of the new bridge interface:
root@kvm:~# ip a s virbr1
43: virbr1: mtu 1500 qdisc noqueue state UP group default
link/ether 52:54:00:ba:8e:2c brd ff:ff:ff:ff:ff:ff
inet 10.10.10.1/24 brd 10.10.10.255 scope global virbr1
valid_lft forever preferred_lft forever
(16) List the iptables rules in the nat table:
root@kvm:~# iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
RETURN all -- 10.10.10.0/24 224.0.0.0/24
RETURN all -- 10.10.10.0/24 255.255.255.255
MASQUERADE tcp -- 10.10.10.0/24 !10.10.10.0/24 masq ports: 1024-65535
MASQUERADE udp -- 10.10.10.0/24 !10.10.10.0/24 masq ports: 1024-65535
MASQUERADE all -- 10.10.10.0/24 !10.10.10.0/24
RETURN all -- 192.168.122.0/24 224.0.0.0/24
RETURN all -- 192.168.122.0/24 255.255.255.255
MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24
RETURN all -- 192.168.122.0/24 224.0.0.0/24
RETURN all -- 192.168.122.0/24 255.255.255.255
MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24
Note: when using NAT, remember to enable packet forwarding on the Linux host:
sysctl net.ipv4.ip_forward=1
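To make the setting survive a reboot, it can also be written to a sysctl configuration file; a sketch (the file name is arbitrary):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/99-ipforward.conf
sysctl -p /etc/sysctl.d/99-ipforward.conf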
V. Bridged networks using a physical NIC, a VLAN interface, a bond interface, or a bonded VLAN interface
We have three NICs here: eth0, eth1, and eth2. eth0 has an IP address and serves as the management network; eth1 and eth2 are dedicated to the bridge configuration and carry no IP addresses.
First, the physical interfaces must not be assigned IP addresses, but they may be configured with VLANs, bonding, and so on.
Once a physical interface is configured, add it to the bridge. This can be a single physical interface such as eth1, a bond interface such as bond0, a VLAN interface (eth1.121 or bond0.121), and so on.
Then an IP address can optionally be assigned to the bridge itself, never to the physical interface (an example is shown after the ifcfg-br0 file below).
We start by creating a bridge called br0 that uses eth1.
Then create the following two files, ifcfg-eth1 and ifcfg-br0:
$ cd /etc/sysconfig/network-scripts
$ sudo cat ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
#Replace the following with your eth1 interface MAC address
HWADDR=52:54:00:32:56:aa
ONBOOT=yes
#Prevent Network Manager from managing this interface,eth1
NM_CONTROLLED=no
#Add this interface to bridge br0
BRIDGE=br0
$ sudo cat ifcfg-br0
DEVICE=br0
#Initiate bridge creation process for this interface br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
#Set the bridge forward delay to 0.
DELAY=0
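If the bridge itself should carry an IP address, as mentioned earlier, lines like the following could be appended to ifcfg-br0 (the addresses are placeholders):
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1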
Enable the network service, disable NetworkManager, and bring the interfaces up:
$ sudo systemctl enable network
$ sudo systemctl disable NetworkManager
$ sudo ifup br0; ifup eth1
$ sudo brctl show
Now we create a bond (bond0) from eth1 and eth2 and add it to the br0 bridge.
$ sudo ifdown br0; ifdown eth1
$ sudo cat ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
HWADDR=52:54:00:32:56:aa
ONBOOT=yes
NM_CONTROLLED=no
SLAVE=yes
MASTER=bond0
$ sudo cat ifcfg-eth2
DEVICE=eth2
TYPE=Ethernet
HWADDR=52:54:00:a6:02:51
ONBOOT=yes
NM_CONTROLLED=no
SLAVE=yes
MASTER=bond0
$ sudo cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
#Here we are using bonding mode 1 (active-backup)
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=br0
NM_CONTROLLED=no
$ sudo ifup bond0
$ sudo brctl show
The following diagram illustrates the resulting configuration:
![](https://s1.51cto.com/images/blog/201806/04/2dd567f97d1b438fd53e6d65ba444790.jpeg?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk)
When using bonding, we recommend mode 1 (active-backup) or mode 4 (802.3ad); see the sketch below.
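For mode 4 the switch ports must also be configured for LACP; the bonding options would then look something like this (a sketch, not part of the configuration above):
BONDING_OPTS='mode=4 miimon=100 lacp_rate=1'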
Next, modify the bond configuration: create a VLAN-tagged interface named bond0.123 and attach it, instead of bond0, to the br0 bridge.
$ sudo ifdown bond0; ifdown br0
$ sudo cp ifcfg-bond0 ifcfg-bond0.123
$ sudo cat ifcfg-bond0.123
DEVICE=bond0.123
ONBOOT=yes
BONDING_OPTS='mode=1 miimon=100'
BRIDGE=br0
NM_CONTROLLED=no
VLAN=yes
Now edit ifcfg-bond0 and comment out BRIDGE=br0 (#BRIDGE=br0):
$ sudo ifup bond0.123
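To verify the result, checks like the following can be used (a sketch; output will vary):
$ sudo cat /proc/net/bonding/bond0       # bonding mode and slave status
$ sudo ip -d link show bond0.123         # should show "vlan protocol 802.1Q id 123"
$ sudo brctl show br0                    # bond0.123 should now be a port of br0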
VI. MacVTap
MacVTap is used when we do not want to create a full bridge but simply want the virtual machine to be reachable on the local network. This connection type is rarely used in production and is mostly found on workstation setups, so a short example is enough here:
Navigate to Add Hardware | Network to add a virtual NIC as the MacVTap interface using virt-manager. At Network source, select the physical NIC interface on the host where you want to enable MacVTap:
The following is the corresponding configuration from the VM:
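A MacVTap NIC appears in the domain XML as a direct-type interface; a sketch, assuming the host NIC eth0 and bridge mode:
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>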
VII. PCI passthrough
This lets the virtual machine use a physical PCI device directly, which improves its performance.
To enable PCI passthrough, you have to use the following steps:
Enable Intel VT-d or AMD IOMMU in the BIOS and kernel:
$ sudo vi /etc/sysconfig/grub
Modify GRUB_CMDLINE_LINUX= to append intel_iommu=on or amd_iommu=on:
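For example, on an Intel host the line might end up looking like this (the options before intel_iommu=on are placeholders for whatever is already there):
GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on"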
Rebuild the grub2 configuration file as follows and then reboot the hypervisor:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Navigate to Hardware | PCI Host Device and select the PCI device to pass through:
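The resulting entry in the domain XML is a hostdev element; a sketch using the 03:00.1 NIC from the command-line example below:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
</hostdev>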
Configuring PCI passthrough from the command line:
The following prerequisites must be met:
- The host's NIC supports SR-IOV
- An 802.1Qbh-capable switch is connected to that NIC
- The CPU supports the Intel VT-d or AMD IOMMU extensions (a quick check is shown right after this list)
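A quick check of the NIC and CPU prerequisites (a sketch, using the 03:00.0 adapter that appears in step 2 below; exact output varies):
lscpu | grep -i virtualization            # VT-x/AMD-V support
dmesg | grep -i -e DMAR -e IOMMU          # IOMMU detected and enabled
lspci -vs 03:00.0 | grep -i sr-iov        # NIC advertises the SR-IOV capability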
Perform the following steps:
(1) List all devices on the host OS:
root@kvm:~# virsh nodedev-list --tree
computer
|
+- net_lo_00_00_00_00_00_00
+- net_ovs_system_0a_c6_62_34_19_b4
+- net_virbr1_nic_52_54_00_ba_8e_2c
+- net_vnet0_fe_54_00_55_9b_d6
...
|
+- pci_0000_00_03_0
| |
| +- pci_0000_03_00_0
| | |
| | +- net_eth0_58_20_b1_00_b8_61
| |
| +- pci_0000_03_00_1
| |
| +- net_eth1_58_20_b1_00_b8_61
|
...
(2) List all Ethernet adapters:
root@kvm:~# lspci | grep Ethernet
03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
03:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
(3) Get more detailed information about the eth1 device:
root@kvm:~# virsh nodedev-dumpxml pci_0000_03_00_1
<device>
  <name>pci_0000_03_00_1</name>
  <path>/sys/devices/pci0000:00/0000:00:03.0/0000:03:00.1</path>
  <parent>pci_0000_00_03_0</parent>
  <driver>
    <name>ixgbe</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>3</bus>
    <slot>0</slot>
    <function>1</function>
    <product id='0x10fb'>82599ES 10-Gigabit SFI/SFP+ Network Connection</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
  </capability>
</device>
(4) Convert the domain, bus, slot, and function values to hexadecimal:
root@kvm:~# printf %x 0
0
root@kvm:~# printf %x 3
3
root@kvm:~# printf %x 0
0
root@kvm:~# printf %x 1
1
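These hexadecimal values are what go into a PCI address element, whether inside a network's forward section or in a hostdev entry; a sketch for device 0000:03:00.1:
<address type='pci' domain='0x0' bus='0x3' slot='0x0' function='0x1'/>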
(5) Create a new libvirt network definition file:
root@kvm:~# cat passthrough_net.xml
<network>
  <name>passthrough_net</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth1'/>
  </forward>
</network>
(6) Define, start, and autostart the libvirt network:
root@kvm:~# virsh net-define passthrough_net.xml
Network passthrough_net defined from passthrough_net.xml
root@kvm:~# virsh net-start passthrough_net
Network passthrough_net started
root@kvm:~# virsh net-autostart passthrough_net
Network passthrough_net marked as autostarted
root@kvm:~# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
passthrough_net active yes yes
(7) Edit the XML definition of the KVM virtual machine:
root@kvm:~# virsh edit kvm1
...
    <interface type='network'>
      <source network='passthrough_net'/>
      ...
    </interface>
...
Domain kvm1 XML configuration edited.
(8) Restart the KVM instance:
root@kvm:~# virsh destroy kvm1
Domain kvm1 destroyed
root@kvm:~# virsh start kvm1
Domain kvm1 started
(9) List the Virtual Functions (VFs) provided by the SR-IOV network adapter:
root@kvm:~# virsh net-dumpxml passthrough_net
<network>
  <name>passthrough_net</name>
  <uuid>a4233231-d353-a112-3422-3451ac78623a</uuid>
  <forward mode='hostdev' managed='yes'>
    <pf dev='eth1'/>
  </forward>
</network>
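When virtual functions are enabled on the physical function, the dumped XML also lists each VF as an address entry under forward; an illustrative sketch (the VF addresses below are assumptions):
<forward mode='hostdev' managed='yes'>
  <pf dev='eth1'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x2'/>
</forward>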
Working with network interfaces:
To manage a new bridge interface through libvirt, run the following commands:
(1) Create a configuration file for the new bridge interface:
root@kvm:~# cat test_bridge.xml
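A minimal sketch of what such a definition can look like, using the 192.168.1.100/24 address seen in step (6) below; the start mode and the empty bridge section are assumptions:
<interface type='bridge' name='test_bridge'>
  <start mode='onboot'/>
  <protocol family='ipv4'>
    <ip address='192.168.1.100' prefix='24'/>
  </protocol>
  <bridge stp='off' delay='0'>
  </bridge>
</interface>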
(2) Define the new interface:
root@kvm:~# virsh iface-define test_bridge.xml
Interface test_bridge defined from test_bridge.xml
(3) List all interfaces:
root@kvm:~# virsh iface-list --all | grep test_bridge
test_bridge inactive
(4) Start the new bridge interface:
root@kvm:~# virsh iface-start test_bridge
Interface test_bridge started
root@kvm:~# virsh iface-list --all | grep test_bridge
test_bridge active 4a:1e:48:e1:e7:de
(5) List all bridge devices on the host:
root@kvm:~# brctl show
bridge name bridge id STP enabled interfaces
test_bridge 8000.000000000000 no
virbr0 8000.000000000000 yes
virbr1 8000.525400ba8e2c yes virbr1-nic
vnet0
(6) Check the bridge's network configuration:
root@kvm:~# ip a s test_bridge
46: test_bridge: mtu 1500 qdisc noqueue state UNKNOWN group default
link/ether 4a:1e:48:e1:e7:de brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global test_bridge
valid_lft forever preferred_lft forever
inet6 fe80::481e:48ff:fee1:e7de/64 scope link
valid_lft forever preferred_lft forever
(7) Get the MAC address of the bridge:
root@kvm:~# virsh iface-mac test_bridge
4a:1e:48:e1:e7:de
(8) Get the bridge name from its MAC address:
root@kvm:~# virsh iface-name 4a:1e:48:e1:e7:de
test_bridge
(9) Stop the bridge interface and remove its definition:
root@kvm:~# virsh iface-destroy test_bridge
Interface test_bridge destroyed
root@kvm:~# virsh iface-list --all | grep test_bridge
test_bridge inactive
root@kvm:~# virsh iface-undefine test_bridge
Interface test_bridge undefined
root@kvm:~# virsh iface-list --all | grep test_bridge