For an introduction to DPDK, see www.dpdk.org.
The steps described here apply to DPDK 1.7.0 through 2.0.0; only the menu displayed by setup.sh differs slightly between versions.
Likewise, they also apply to newer Ubuntu releases (verified on Ubuntu 12.04+ and 14.04).
OS: Ubuntu 12.04.3 LTS 64-bit, CentOS Linux release 7.0.1406 64-bit
DPDK: 1.7.0 (download page)
Testing showed that DPDK 1.7.1 has a problem on both systems: every sample application fails with the following error:
EAL: Error reading from file descriptor
This bug has since been fixed by the DPDK developers; the patch is as follows:
diff --git a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
index d1ca26e..c46a00f 100644
--- a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
+++ b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
@@ -505,14 +505,11 @@ igbuio_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 		}
 	/* fall back to INTX */
 	case RTE_INTR_MODE_LEGACY:
-		if (pci_intx_mask_supported(dev)) {
-			dev_dbg(&dev->dev, "using INTX");
-			udev->info.irq_flags = IRQF_SHARED;
-			udev->info.irq = dev->irq;
-			udev->mode = RTE_INTR_MODE_LEGACY;
-			break;
-		}
-		dev_notice(&dev->dev, "PCI INTX mask not supported\n");
+		dev_dbg(&dev->dev, "using INTX");
+		udev->info.irq_flags = IRQF_SHARED;
+		udev->info.irq = dev->irq;
+		udev->mode = RTE_INTR_MODE_LEGACY;
+		break;
 	/* fall back to no IRQ */
 	case RTE_INTR_MODE_NONE:
 		udev->mode = RTE_INTR_MODE_NONE;
When running inside a VM, apply the patch above (or make the same changes to the file by hand) and rebuild.
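If the patch is saved to a file, say igb_uio.patch (the filename here is just for illustration), it can be applied from the DPDK source root and the target rebuilt:

cd <dpdk>
patch -p1 < igb_uio.patch
make install T=x86_64-native-linuxapp-gcc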
VM software: VMware Workstation 10.0.1 build-1379776
CPU: 2 CPUs, 2 cores each
Memory: 1 GB+
NICs: 2 Intel NICs for the DPDK experiments, plus one additional NIC for communicating with the host system
You need gcc and a few other small tools; these are normally present by default, and anything missing can be installed with sudo apt-get install. Some of the DPDK scripts use python, so install that as well.
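As a minimal sketch of the prerequisite installation (package names as they appear on stock Ubuntu 12.04/14.04):

sudo apt-get update
sudo apt-get install build-essential python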
First run su to switch to root; if the root account has not been enabled, use
sudo passwd root
to enable it.
DPDK ships a convenient configuration script, <dpdk>/tools/setup.sh, which makes setting up the environment easy.
1) Set the environment variables; the values below are for 64-bit Linux:
export RTE_SDK=<dpdk root directory>
export RTE_TARGET=x86_64-native-linuxapp-gcc
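For example, with the source tree from this walkthrough unpacked under /home/hack:

export RTE_SDK=/home/hack/dpdk-1.7.0
export RTE_TARGET=x86_64-native-linuxapp-gcc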
2) Run setup.sh; it displays the following menu:
------------------------------------------------------------------------------
 RTE_SDK exported as /home/hack/dpdk-1.7.0
------------------------------------------------------------------------------
----------------------------------------------------------
 Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-native-linuxapp-gcc
[2] i686-native-linuxapp-icc
[3] x86_64-ivshmem-linuxapp-gcc
[4] x86_64-ivshmem-linuxapp-icc
[5] x86_64-native-bsdapp-gcc
[6] x86_64-native-linuxapp-gcc
[7] x86_64-native-linuxapp-icc
----------------------------------------------------------
 Step 2: Setup linuxapp environment
----------------------------------------------------------
[8] Insert IGB UIO module
[9] Insert VFIO module
[10] Insert KNI module
[11] Setup hugepage mappings for non-NUMA systems
[12] Setup hugepage mappings for NUMA systems
[13] Display current Ethernet device settings
[14] Bind Ethernet device to IGB UIO module
[15] Bind Ethernet device to VFIO module
[16] Setup VFIO permissions
----------------------------------------------------------
 Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[17] Run test application ($RTE_TARGET/app/test)
[18] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)
----------------------------------------------------------
 Step 4: Other tools
----------------------------------------------------------
[19] List hugepage info from /proc/meminfo
----------------------------------------------------------
 Step 5: Uninstall and system cleanup
----------------------------------------------------------
[20] Uninstall all targets
[21] Unbind NICs from IGB UIO driver
[22] Remove IGB UIO module
[23] Remove VFIO module
[24] Remove KNI module
[25] Remove hugepage mappings
[26] Exit Script
Select 6 to build.
3) Select 8 to insert the igb_uio module.
4) Select 11 to configure hugepages (non-NUMA). You will be prompted for the number of pages; enter something like 64 or 128:
Removing currently reserved hugepages
Unmounting /mnt/huge and removing directory
Input the number of 2MB pages
Example: to have 128MB of hugepages available, enter '64' to reserve 64 * 2MB pages
Number of pages: 128
Select 19 to confirm the hugepage configuration (128 pages × 2 MB = 256 MB reserved):
AnonHugePages:         0 kB
HugePages_Total:     128
HugePages_Free:      128
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
5) Select 14 to bind the NICs that DPDK will use:
Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=igb_uio *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth1 drv=e1000 unused=igb_uio
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio

Other network devices
=====================
<none>

Enter PCI address of device to bind to IGB UIO driver: 0000:02:06.0
Once they are bound, select 13 to view the current NIC configuration:
Network devices using DPDK-compatible driver
============================================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=igb_uio *Active*

Other network devices
=====================
<none>
6) Select 18 to run the testpmd test application.
Note that to run this test, the VM should provide two NICs for DPDK.
Enter hex bitmask of cores to execute testpmd app on
Example: to execute app on cores 0 to 7, enter 0xff
bitmask: f
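The bitmask has one bit per logical core, so f (binary 1111) selects cores 0-3, i.e. all four cores of this VM. If you want to double-check a mask, the shell can compute it; for cores 0 to 3:

printf '0x%x\n' $(( (1 << 4) - 1 ))    # prints 0xf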
If everything is fine, pressing Enter produces output like the following:
Launching app
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 1 on socket 0
EAL: Support maximum 64 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0xf000000 bytes
EAL: Virtual area found at 0x7fe828000000 (size = 0xf000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fe827c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fe827800000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fe826e00000 (size = 0x800000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fe826800000 (size = 0x400000)
EAL: Requesting 128 pages of size 2MB from socket 0
EAL: TSC frequency is ~3292453 KHz
EAL: Master core 0 is ready (tid=37c79800)
EAL: Core 3 is ready (tid=24ffc700)
EAL: Core 2 is ready (tid=257fd700)
EAL: Core 1 is ready (tid=25ffe700)
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   0000:02:01.0 not managed by UIO driver, skipping
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   PCI memory mapped at 0x7fe837c23000
EAL:   PCI memory mapped at 0x7fe837c13000
EAL: PCI device 0000:02:07.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   PCI memory mapped at 0x7fe837bf3000
EAL:   PCI memory mapped at 0x7fe837be3000
Interactive-mode selected
Configuring Port 0 (socket 0)
Port 0: 00:0C:29:14:50:CE
Configuring Port 1 (socket 0)
Port 1: 00:0C:29:14:50:D8
Checking link statuses...
Port 0 Link Up - speed 1000 Mbps - full-duplex
Port 1 Link Up - speed 1000 Mbps - full-duplex
Done
testpmd>
Type start to begin packet forwarding:
testpmd> start
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
Type stop to stop forwarding; the statistics are then displayed:
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 5544832        RX-dropped: 0             RX-total: 5544832
  TX-packets: 5544832        TX-dropped: 0             TX-total: 5544832
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 5544832        RX-dropped: 0             RX-total: 5544832
  TX-packets: 5544832        TX-dropped: 0             TX-total: 5544832
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 11089664       RX-dropped: 0             RX-total: 11089664
  TX-packets: 11089664       TX-dropped: 0             TX-total: 11089664
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
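In io forwarding mode each packet received on one port is transmitted out the other, so with zero drops the RX and TX totals of both ports match, as they do here.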
Everything setup.sh does can also be done by hand with ordinary commands. It is best to switch to root first.
1) Build DPDK
Enter the DPDK root directory <dpdk> and run
make install T=x86_64-native-linuxapp-gcc
to build it.
2) Configure hugepages (non-NUMA)
echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
The hugepage state can be checked with:
cat /proc/meminfo | grep Huge
3) Install the igb_uio driver
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
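To confirm that both modules are loaded, check with lsmod:

lsmod | grep uio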
4) Bind the NICs
First check the current NIC status:
./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
<none>

Network devices using kernel driver
===================================
0000:02:01.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth0 drv=e1000 unused=igb_uio *Active*

Other network devices
=====================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' unused=e1000,igb_uio
0000:02:07.0 '82545EM Gigabit Ethernet Controller (Copper)' unused=e1000,igb_uio
Then bind them:
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:06.0
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:07.0
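Running ./tools/dpdk_nic_bind.py --status again at this point should list both ports under 'Network devices using DPDK-compatible driver', as in the setup.sh walkthrough above.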
If a NIC has an interface name such as eth1 or eth2, you can also pass that name after -b igb_uio instead of the PCI address.
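For example, for a port that shows up as if=eth1 in the status output:

./tools/dpdk_nic_bind.py -b igb_uio eth1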
5) Run the testpmd test application
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 2 -- -i
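Here -c 0x3 is the hex core mask (cores 0 and 1), -n 2 tells the EAL the number of memory channels, and everything after the -- separator goes to testpmd itself; -i selects interactive mode.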
6) Build and run the other sample applications
There are many sample applications under <dpdk>/examples; they are not built as part of the DPDK build. Taking helloworld as an example, first set the environment variables:
export RTE_SDK=<dpdk root directory>
export RTE_TARGET=x86_64-native-linuxapp-gcc
Then enter <dpdk>/examples/helloworld and run make. On success this produces a build directory containing the compiled helloworld binary.
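For orientation, the heart of the helloworld sample boils down to roughly the following (a condensed sketch, not the verbatim shipped source): initialize the EAL, then launch a print function on every logical core.

#include <stdio.h>

#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Runs on each lcore: report which core we are on. */
static int
lcore_hello(void *arg)
{
	(void)arg;
	printf("hello from core %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	/* Initialize the EAL; this consumes the -c/-n options. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0)
		rte_panic("Cannot init EAL\n");

	/* Launch lcore_hello() on every slave lcore, then run it
	 * on the master core as well. */
	unsigned lcore_id;
	RTE_LCORE_FOREACH_SLAVE(lcore_id)
		rte_eal_remote_launch(lcore_hello, NULL, lcore_id);
	lcore_hello(NULL);

	/* Wait for all lcores to finish before exiting. */
	rte_eal_mp_wait_lcore();
	return 0;
}

It is run like the other DPDK applications, e.g. ./build/helloworld -c 0x3 -n 2.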
When installing the CentOS VM, if you chose the minimal install, you also need to install the basic development tool set (gcc, python, and so on).
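On CentOS 7, one way to pull these in (using the stock package group name) is:

yum groupinstall "Development Tools"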
In addition, the dpdk_nic_bind.py script shipped with DPDK calls the lspci command, which is not installed by default; without it, NICs cannot be bound. Install it with:
yum install pciutils
ifconfig is not installed by default either; if you want to use it, run:
yum install net-tools
On CentOS, a NIC that is about to be bound to DPDK may still be active, and an active NIC cannot be bound; disable it first. One way to do that is:
ifconfig eno33554984 down
where eno33554984 is the interface name, just like eth0.
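If you would rather not install net-tools, iproute2 (present even on a minimal CentOS 7 install) does the same thing:

ip link set eno33554984 down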
Using setup.sh, or building and configuring DPDK with commands, works the same way on CentOS as on Ubuntu, so those steps are omitted here.