DPDK Installation and Configuration Tutorial with a hello world Walkthrough

DPDK Installation Tutorial

1. Hardware Configuration

Item            Value
CPU             32 × Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Kernel version  Linux host 4.9.0-13-amd64 #1 SMP Debian 4.9.228-1
DPDK version    dpdk-stable-20.02.1
NIC model       any of the supported NICs listed at https://core.dpdk.org/supported/

Check whether HPET is supported. If the command below prints nothing, HPET is not available and needs to be enabled in the BIOS:

grep hpet /proc/timer_list
# ========================output info====================
Clock Event Device: hpet
 set_next_event: hpet_legacy_next_event
 shutdown: hpet_legacy_shutdown
 periodic: hpet_legacy_set_periodic
 oneshot:  hpet_legacy_set_oneshot
 resume:   hpet_legacy_resume
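
Note that even when the platform exposes HPET, DPDK's make-based build keeps HPET support disabled by default. If you want the EAL HPET timer API, the build-time switch has to be flipped before compiling. A sketch, assuming the stock config layout of dpdk-stable-20.02.1:

# Optional: enable HPET support in the DPDK build config (off by default);
# run from the DPDK source root before building
sed -i 's/CONFIG_RTE_LIBEAL_USE_HPET=n/CONFIG_RTE_LIBEAL_USE_HPET=y/' config/common_base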

2. Configure System Hugepages

Hugepages can be configured either temporarily (lost on reboot, so they must be set up again) or permanently (via kernel boot parameters). The permanent configuration is recommended.

Linux adds support for 2MB and 1GB hugepages through the special hugetlbfs filesystem. Both configuration methods are shown below.

2.1 Permanent configuration with hugepages = 1GB

Mount hugetlbfs so that the hugepage memory can be handed to DPDK:

mkdir /mnt/huge_1GB
mount -t hugetlbfs -o pagesize=1GB nodev /mnt/huge_1GB
vim /etc/fstab
# Add the following line to the file
nodev /mnt/huge_1GB hugetlbfs pagesize=1GB 0 0

Edit GRUB_CMDLINE_LINUX in /etc/default/grub, then update grub and reboot the system:

vi /etc/default/grub

Append the following to the GRUB_CMDLINE_LINUX option:

default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 iommu=pt intel_iommu=on isolcpus=1,3,5,7,9,11,13,15,31
  • default_hugepagesz=1G hugepagesz=1G hugepages=16: sets the default hugepage size to 1GB and reserves 16 pages, i.e. 16GB in total; the second pair, hugepagesz=2M hugepages=2048, additionally reserves 2048 × 2MB pages

  • iommu=pt intel_iommu=on: additional grub parameters required if you want to use VFIO

  • isolcpus: isolates the CPU cores that will be dedicated to DPDK; choose the cores to isolate according to the NUMA node the NIC sits on

    # Check the NUMA topology:
    apt-get install numactl
    numactl --hardware 
    
    # Suppose the NUMA topology looks like this:
                        sockets =  [0, 1]
    
                               Socket 0        Socket 1
                               --------        --------
                        Core 0 [0, 16]         [1, 17]
                        Core 1 [2, 18]         [3, 19]
                        Core 2 [4, 20]         [5, 21]
                        Core 3 [6, 22]         [7, 23]
                        Core 4 [8, 24]         [9, 25]
                        Core 5 [10, 26]        [11, 27]
                        Core 6 [12, 28]        [13, 29]
                        Core 7 [14, 30]        [15, 31]
    If the NIC to be bound to DPDK sits on Socket 1, you must use lcores from the same NUMA node; otherwise performance drops sharply. Here we could pick lcores 1, 3, 5, 7, 9, 11, 13, 15, etc. to run the DPDK program, which is why exactly those lcores are isolated in isolcpus above.
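
    To check which NUMA node a given NIC actually sits on, the PCI sysfs attribute can be read directly. A sketch, using 0000:82:00.0 (the BCM57414 port from the --status output later in this article) as an example:

    # Prints the device's NUMA node; -1 means the platform reported none
    cat /sys/bus/pci/devices/0000:82:00.0/numa_node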
    

Update grub:

sudo update-grub

Reboot the system. Note: if you reboot after having already loaded the UIO module and bound NICs, you must reload the UIO driver and rebind the NICs afterwards:

reboot

Check whether the configuration took effect:

cat /proc/meminfo  | grep Huge
# ========================output info====================
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
HugePages_Total:      16
HugePages_Free:       16
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
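
On a NUMA machine it is also worth confirming how the reserved 1GB pages were spread across the nodes. A sketch using the standard sysfs layout:

# Per-node count of reserved 1GB hugepages
cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages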

2.2 Temporary configuration with hugepages = 2MB

If the system supports 1GB hugepages, prefer them. The following configures 1024 × 2MB hugepages temporarily:

mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
vim  /etc/fstab
# Add the following line to the file
nodev /mnt/huge hugetlbfs defaults 0 0
  • Non-NUMA systems: echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

  • NUMA systems (set the count on each node):

    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    

After configuring, check with cat /proc/meminfo that it took effect.
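
On NUMA systems the per-node 2MB counts can be confirmed the same way (a sketch):

cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages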

Note: if HugePages_Free drops to 0 while DPDK programs are running and new programs then fail to start, the pages can be reclaimed as follows:

# Get the hugepage size.
awk '/Hugepagesize/ {print $2}' /proc/meminfo

# Get the total huge page numbers.
awk '/HugePages_Total/ {print $2} ' /proc/meminfo

# Unmount the hugepages.
umount `awk '/hugetlbfs/ {print $2}' /proc/mounts`

# Create the hugepage mount folder.
mkdir -p /mnt/huge

# Mount to the specific folder.
mount -t hugetlbfs nodev /mnt/huge

# Check the result
cat /proc/meminfo

3. DPDK Installation

Download the DPDK source code; dpdk-stable-20.02.1 is used here:

wget http://fast.dpdk.org/rel/dpdk-20.02.1.tar.gz
tar zxvf dpdk-20.02.1.tar.gz
cd dpdk-stable-20.02.1

Install dependencies:

apt-get install libnuma-dev

Build and install into the target directory:

make install T=x86_64-native-linuxapp-gcc
# ========================format description====================
T stands for Target; a Target is described in the format:
ARCH-MACHINE-EXECENV-TOOLCHAIN
- ARCH = i686, x86_64, ppc_64
- MACHINE = native, ivshmem, power8
- EXECENV = linuxapp, bsdapp
- TOOLCHAIN = gcc, icc
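
For example, to build the same tree with the Intel compiler instead of gcc, only the TOOLCHAIN component changes (illustrative; requires icc to be installed):

make install T=x86_64-native-linuxapp-icc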

After the build completes, the target environment directory x86_64-native-linuxapp-gcc is generated:

cd x86_64-native-linuxapp-gcc
ls
# ========================output info====================
app  build  include  kmod  lib  Makefile

Load the UIO kernel module:

modprobe uio_pci_generic
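
Alternatively, the DPDK-built igb_uio module from the kmod directory shown above can be used in place of uio_pci_generic. A sketch, assuming the make-based build completed as described:

modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko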

To use DPDK, the NICs must be bound to the uio_pci_generic module. DPDK provides the dpdk-devbind.py script in the usertools directory to do this. First check the status before binding:

[root:/home/dingfuxiao/dpdk-stable-20.02.1/usertools]$./dpdk-devbind.py --status
# ========================output info====================
Network devices using kernel driver
===================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth3 drv=ixgbe unused=uio_pci_generic 
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth2 drv=ixgbe unused=uio_pci_generic 
0000:06:00.0 'I350 Gigabit Network Connection 1521' if=eth4 drv=igb unused=uio_pci_generic *Active*
0000:06:00.1 'I350 Gigabit Network Connection 1521' if=eth5 drv=igb unused=uio_pci_generic 
0000:82:00.0 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller 16d7' if=eth1 drv=bnxt_en unused=uio_pci_generic 
0000:82:00.1 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller 16d7' if=eth0 drv=bnxt_en unused=uio_pci_generic
Other Network devices
=====================


Crypto devices using DPDK-compatible driver
===========================================


Crypto devices using kernel driver
==================================


Other Crypto devices
====================


Eventdev devices using DPDK-compatible driver
=============================================


Eventdev devices using kernel driver
====================================


Other Eventdev devices
======================


Mempool devices using DPDK-compatible driver
============================================


Mempool devices using kernel driver
===================================


Other Mempool devices
=====================

No network interfaces are bound to DPDK yet. Now bind eth0 and eth1 to DPDK:

# The interface must be brought down first
ifconfig eth0 down
./dpdk-devbind.py --bind=uio_pci_generic eth0
./dpdk-devbind.py --status
ifconfig eth1 down
./dpdk-devbind.py --bind=uio_pci_generic eth1
./dpdk-devbind.py --status
# ========================output info====================
Network devices using DPDK-compatible driver
============================================
0000:82:00.0 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller 16d7' drv=uio_pci_generic unused=bnxt_en
0000:82:00.1 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller 16d7' drv=uio_pci_generic unused=bnxt_en

Network devices using kernel driver
===================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth3 drv=ixgbe unused=uio_pci_generic 
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth2 drv=ixgbe unused=uio_pci_generic 
0000:06:00.0 'I350 Gigabit Network Connection 1521' if=eth4 drv=igb unused=uio_pci_generic *Active*
0000:06:00.1 'I350 Gigabit Network Connection 1521' if=eth5 drv=igb unused=uio_pci_generic 

Other Network devices
=====================


Crypto devices using DPDK-compatible driver
===========================================


Crypto devices using kernel driver
==================================


Other Crypto devices
====================


Eventdev devices using DPDK-compatible driver
=============================================


Eventdev devices using kernel driver
====================================


Other Eventdev devices
======================


Mempool devices using DPDK-compatible driver
============================================


Mempool devices using kernel driver
===================================


Other Mempool devices
=====================

eth0 and eth1 are now driven by DPDK. Run dmesg | tail to check the kernel module messages:

dmesg | tail
# ========================output info====================
[   40.303396] device-mapper: uevent: version 1.0.3
[   40.303493] device-mapper: ioctl: 4.35.0-ioctl (2016-06-23) initialised: [email protected]
[   46.110282] systemd[1]: apt-daily.timer: Adding 3h 7min 32.471933s random time.
[  115.004038] usb 1-1.4: USB disconnect, device number 3
[ 2833.197700] IPv6: ADDRCONF(NETDEV_UP): eth4: link is not ready
[ 2836.888038] igb 0000:06:00.0 eth4: igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 2836.888221] IPv6: ADDRCONF(NETDEV_CHANGE): eth4: link becomes ready
[ 2883.084037] bnxt_en 0000:82:00.1 eth0: NIC Link is Up, 25000 Mbps full duplex, Flow control: ON - receive & transmit
[ 2885.403482] bnxt_en 0000:82:00.0 eth1: NIC Link is Up, 25000 Mbps full duplex, Flow control: ON - receive & transmit
[ 2990.589091] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

4. hello world

Before compiling, RTE_SDK and RTE_TARGET must be exported as environment variables: RTE_SDK is the DPDK install directory and RTE_TARGET is the DPDK target environment directory.

export RTE_SDK=/home/dingfuxiao/dpdk-stable-20.02.1/
export RTE_TARGET=x86_64-native-linuxapp-gcc

Build the simple application in the examples directory:

cd examples/helloworld/
make
./build/helloworld
# ========================output info====================
EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
hello from core 2
hello from core 4
hello from core 6
hello from core 8
hello from core 10
hello from core 12
hello from core 14
hello from core 16
hello from core 17
hello from core 18
hello from core 19
hello from core 20
hello from core 21
hello from core 22
hello from core 23
hello from core 24
hello from core 25
hello from core 26
hello from core 27
hello from core 28
hello from core 29
hello from core 30
hello from core 0

Note: the isolated cores do not print anything. Cores isolated with isolcpus are removed from the process's default CPU affinity mask, and since helloworld was started without an explicit -l/-c core list, EAL only launches worker threads on the cores in that mask.
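
For reference, the whole of examples/helloworld/main.c boils down to initializing the EAL and launching a tiny function on every worker lcore. The sketch below matches the 20.02-era API from memory; the file in your source tree is authoritative:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Runs once per lcore and reports which core it landed on. */
static int
lcore_hello(__attribute__((unused)) void *arg)
{
    printf("hello from core %u\n", rte_lcore_id());
    return 0;
}

int
main(int argc, char **argv)
{
    unsigned lcore_id;

    /* Initialize the Environment Abstraction Layer: parses the EAL
     * command-line options and spawns one pinned thread per lcore. */
    if (rte_eal_init(argc, argv) < 0)
        rte_panic("Cannot init EAL\n");

    /* Launch lcore_hello() on every slave lcore... */
    RTE_LCORE_FOREACH_SLAVE(lcore_id)
        rte_eal_remote_launch(lcore_hello, NULL, lcore_id);

    /* ...and call it on the master lcore as well. */
    lcore_hello(NULL);

    /* Wait for all lcores to finish before exiting. */
    rte_eal_mp_wait_lcore();
    return 0;
}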

5. Binding and Unbinding NICs for DPDK

5.1 Check NIC status

[root:/home/dingfuxiao/dpdk-stable-20.02.1/usertools]$./dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:82:00.0 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller 16d7' drv=uio_pci_generic unused=bnxt_en
0000:82:00.1 'BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller 16d7' drv=uio_pci_generic unused=bnxt_en

Network devices using kernel driver
===================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=eth3 drv=ixgbe unused=uio_pci_generic
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=rename9 drv=ixgbe unused=uio_pci_generic
0000:06:00.0 'I350 Gigabit Network Connection 1521' if=eth4 drv=igb unused=uio_pci_generic *Active*
0000:06:00.1 'I350 Gigabit Network Connection 1521' if=rename6 drv=igb unused=uio_pci_generic
0000:81:00.0 'Device 159b' if=eth6 drv=ice unused=uio_pci_generic
0000:81:00.1 'Device 159b' if=eth7 drv=ice unused=uio_pci_generic

Other Network devices
=====================
0000:03:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' unused=i40e,uio_pci_generic
0000:03:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' unused=i40e,uio_pci_generic

No 'Baseband' devices detected
==============================

No 'Crypto' devices detected
============================

No 'Eventdev' devices detected
==============================

No 'Mempool' devices detected
=============================

No 'Compress' devices detected
==============================

No 'Misc (rawdev)' devices detected
===================================

As shown, the BCM57414 ports are currently using the UIO driver, while the other NICs are still on their kernel drivers.

5.2 Bind NICs

Example: bind the two ports of an Intel E810

  • Method 1: bind by interface name (the ethX name only exists while the device is held by a kernel driver)

    ./dpdk-devbind.py --bind=uio_pci_generic eth6
    ./dpdk-devbind.py --bind=uio_pci_generic eth7
    
  • Method 2: bind by PCI address (works regardless of the device's current state)

    ./dpdk-devbind.py -b uio_pci_generic 81:00.0 81:00.1
    

5.3 Unbind NICs

./dpdk-devbind.py -u 81:00.0 81:00.1

5.4 Switch a NIC back to the kernel driver

./dpdk-devbind.py -b ice 81:00.0 81:00.1

Note: ice is the kernel driver for this particular NIC.
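
If you are unsure which kernel module drives a given device, lspci can report both the driver in use and the candidate modules (a sketch):

# Shows "Kernel driver in use" and "Kernel modules" for the device
lspci -vk -s 81:00.0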
