Hands on DPDK OVS

Install the required dependencies

apt-get update

apt-get install -y make

apt-get install -y gcc

apt-get install -y libssl-dev

apt-get install -y libcap-ng0

apt-get install -y libtool

apt-get install -y autoconf

apt-get install -y qemu

apt-get install -y automake pciutils hwloc numactl

apt-get install -y libnuma-dev

apt-get install -y libpcap0.8-dev

apt-get install -y openssl

apt-get install -y python-pip

pip install --upgrade pip

If this step fails with "ImportError: cannot import name main", change the last three lines of /usr/bin/pip to:

from pip import __main__

if __name__ == '__main__':

    sys.exit(__main__._main())

pip install six

Download and build DPDK

# mkdir -p dpdk 

# wget http://fast.dpdk.org/rel/dpdk-17.08.1.tar.xz

# tar xf dpdk-17.08.1.tar.xz /*unpacks into dpdk-stable-17.08.1*/

# export DPDK_DIR=$YOUR_FOLDER_THAT_KEEP_DPDK /*no spaces around '='*/

# export DPDK_TARGET=x86_64-native-linuxapp-gcc

# export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET

# cd $DPDK_DIR

# make install T=$DPDK_TARGET DESTDIR=install
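The environment variables above must be assigned without spaces around `=`; a minimal sketch of the path composition the build expects (the unpack location below is an assumption, point DPDK_DIR at yours):

```shell
# No spaces are allowed around '=' in shell assignments;
# "DPDK_DIR= /some/path" would run /some/path as a command instead.
DPDK_DIR=$HOME/dpdk/dpdk-stable-17.08.1   # assumed unpack location
DPDK_TARGET=x86_64-native-linuxapp-gcc
DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
echo "$DPDK_BUILD"
```

$DPDK_BUILD is what OVS's configure step consumes below.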

Build, install, and run OVS

# mkdir -p ovs

# git clone https://github.com/openvswitch/ovs.git

# cd ovs

# export OVS_DIR=$(pwd)

# git checkout branch-2.8

# ./boot.sh

# ./configure --with-dpdk=$DPDK_BUILD

# make -j 10 && make install

# mkdir -p /usr/local/etc/openvswitch

# mkdir -p /usr/local/var/run/openvswitch

# cd $OVS_DIR

# sudo ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db ./vswitchd/vswitch.ovsschema

# sudo ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

# sudo ./utilities/ovs-vsctl --no-wait init

Hugepages

For 64-bit applications, 1G hugepages are recommended when the system supports them. Allocating 1G hugepages at run time requires kernel support for the contiguous memory allocator (CONFIG_CMA).

# cat /proc/cpuinfo | grep -E 'pse|pdpe1gb' /*The hugepage sizes a CPU supports can be determined from the CPU flags on Intel architecture: if pse is present, 2M hugepages are supported; if pdpe1gb is present, 1G hugepages are supported. On IBM Power, the supported sizes are 16MB and 16GB.*/
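A small sketch of the flag check: the `flags` string below is a sample we made up; on a real machine substitute the output of `grep -m1 '^flags' /proc/cpuinfo`:

```shell
# Sample CPU flags string (hypothetical); replace with the real flags line.
flags="fpu vme pse pae pdpe1gb"
supported=""
case " $flags " in *" pse "*)     supported="$supported 2M" ;; esac
case " $flags " in *" pdpe1gb "*) supported="$supported 1G" ;; esac
echo "hugepage sizes:$supported"
```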

# mkdir -p /mnt/huge

# mkdir -p /mnt/huge_2mb

# mount -t hugetlbfs none /mnt/huge

# mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB

Edit /etc/default/grub and add:

GRUB_CMDLINE_LINUX_DEFAULT="default_hugepagesz=2M hugepagesz=2M hugepages=1024 iommu=pt intel_iommu=on isolcpus=2-7"

Here hugepages sets how many pages the system reserves for hugepage use, and hugepagesz sets the page size; we use 2M here, but use 1G if possible.
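The total memory reserved is simply the page count times the page size; a quick sketch for the GRUB line above:

```shell
# RAM reserved by "hugepagesz=2M hugepages=1024"
hugepages=1024      # number of pages reserved
hugepagesz_mb=2     # page size in MB
reserved_mb=$((hugepages * hugepagesz_mb))
echo "reserved: ${reserved_mb} MB"   # 1024 x 2 MB = 2048 MB
```

Make sure the reservation leaves enough ordinary memory for the host and covers what dpdk-socket-mem and the VM's memory backend will consume.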

isolcpus isolates CPUs from the general kernel scheduler. To illustrate the CPU tuning strategy for the OVS-DPDK experiment, the figure below uses a 24-core hyper-threaded system as an example; besides isolcpus, note the DPDK-specific CPU core pinning and operation options.


[Figure 1: CPU policy]

The CPU information of the system actually used in this experiment is shown below; configuring isolcpus=2-7 means only cores 0 and 1 are left for system scheduling.

Socket 0: 0 2 4 6

Socket 1: 1 3 5 7

===== CPU Info Summary =====

Logical processors: 8

Physical sockets: 2

Siblings in one socket:  4

Cores in one socket:  4

Cores in total: 8

Hyper-Threading: off

After the configuration, run:

# update-grub

# reboot

Other OVS-DPDK configuration

# dmesg | grep -e DMAR -e IOMMU /*make sure the IOMMU is supported*/

[ 0.000000] DMAR: IOMMU enabled

[ 3.976840] AMD IOMMUv2 driver by Joerg Roedel

[ 3.976841] AMD IOMMUv2 functionality not available on this system

# modprobe vfio-pci

# chmod a+x /dev/vfio

# chmod 0666 /dev/vfio/*

# modprobe openvswitch

# cd $OVS_DIR

# ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# mount -t hugetlbfs none /mnt/huge

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir="/mnt/huge" /*must match the hugetlbfs mount point above*/

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"
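dpdk-socket-mem is a comma-separated list of megabytes to pre-allocate per NUMA node, so "4096,0" gives 4 GB to node 0 and nothing to node 1. A sketch of how the value decomposes (variable names are ours):

```shell
# dpdk-socket-mem format: "<MB for NUMA 0>,<MB for NUMA 1>,..."
socket_mem="4096,0"
node0=${socket_mem%%,*}   # MB pre-allocated on NUMA node 0
node1=${socket_mem##*,}   # MB pre-allocated on NUMA node 1
echo "NUMA0=${node0}MB NUMA1=${node1}MB"
```

The per-node figures must fit inside the hugepage pool reserved at boot.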

Core 2 is used for DPDK non-datapath processing:

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x4"

Cores 4 and 5 are used for DPDK datapath (PMD) processing:

# ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0x30"
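Both masks are hexadecimal bitmaps with one bit per core ID. A sketch of how they are derived (the helper `cores_to_mask` is our own, not part of OVS or DPDK):

```shell
# Build a hex CPU mask from a list of core IDs: bit N set means core N.
cores_to_mask() {
  local mask=0 c
  for c in "$@"; do
    mask=$((mask | (1 << c)))
  done
  printf '0x%x\n' "$mask"
}

cores_to_mask 2      # dpdk-lcore-mask: core 2      -> 0x4
cores_to_mask 4 5    # pmd-cpu-mask:   cores 4 and 5 -> 0x30
```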

# ./vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log

Create the bridge and ports

# ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser /*this creates the associated socket under the default path /usr/local/var/run/openvswitch/vhost-user1*/

# ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

# chmod 777 /usr/local/var/run/openvswitch/vhost-user*

Create the VMs

1. Create directly from the command line

If the experiment runs under nested virtualization, first confirm that the hypervisor supports KVM:

# egrep '^flags.*(vmx|svm)' /proc/cpuinfo

Normally the vmx or svm flag will appear in the output.

Run the following command:

qemu-system-x86_64 -m 4096 -smp 4 -cpu host -hda /home/set/ubuntu-16.04-server-cloudimg-amd64-disk1.img -boot c -enable-kvm -no-reboot -net none -nographic \

-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \

-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \

-device virtio-net-pci,mac=00:00:00:00:00:11,netdev=mynet1 \

-object memory-backend-file,id=mem,size=4G,mem-path=/mnt/huge,share=on \

-numa node,memdev=mem -mem-prealloc \

-virtfs local,mount_tag=host0,security_model=none,id=vm1_dev

Explanations for command above:

Block device options:

-fda/-fdb file  use 'file' as floppy disk 0/1 image

-hda/-hdb file  use 'file' as IDE hard disk 0/1 image

-hdc/-hdd file  use 'file' as IDE hard disk 2/3 image

Character device options:

-chardev socket,id=id,path=path[,server][,nowait][,telnet][,reconnect=seconds][,mux=on|off] (unix)

Network options:

-netdev vhost-user,id=str,chardev=dev[,vhostforce=on|off]

                configure a vhost-user network, backed by a chardev 'dev'

-device driver[,prop[=value][,...]]

                add device (based on driver)

                prop=value,... sets driver properties

                use '-device help' to print all possible drivers

                use '-device driver,help' to print all possible properties

Standard options:

-mem-path FILE  provide backing storage for guest RAM

-mem-prealloc   preallocate guest memory (use with -mem-path)

If the machine reboots, remember to reload the required kernel modules and restart the corresponding processes:

# modprobe vfio-pci

# chmod a+x /dev/vfio

# chmod 0666 /dev/vfio/*

# modprobe openvswitch

# ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# mount -t hugetlbfs none /mnt/huge

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-hugepage-dir="/mnt/huge"

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"

# ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask="0x4"

# ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask="0x30"

# ./vswitchd/ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach --log-file=/var/log/openvswitch/ovs-vswitchd.log

2. Create via XML (recommended)

# virsh create testvm1.xml

After the VM comes up, configure one NIC attached to virbr0 for external network access and the other attached to the vhost bridge for VM-to-VM connectivity and traffic testing; remember to add the corresponding default route. In the guest (e.g. /etc/network/interfaces):

auto ens2

iface ens2 inet static

address 192.168.1.100

netmask 255.255.255.0

gateway 192.168.1.1

auto ens3

iface ens3 inet dhcp

[Figure 2: the device->interface section of the VM XML]
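For the vhost-user ports created earlier, the interface element of the libvirt domain XML typically looks like the following sketch (the MAC and socket path are taken from this walkthrough; adjust them to your setup, and note the domain also needs hugepage-backed memory for vhost-user to work):

```xml
<interface type='vhostuser'>
  <mac address='00:00:00:00:00:11'/>
  <source type='unix'
          path='/usr/local/var/run/openvswitch/vhost-user1'
          mode='client'/>
  <model type='virtio'/>
</interface>
```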


[Figure 3: test topology]
