Following up on the previous article, this is a review and consolidation:
Test setup: VMware virtual machine (Ubuntu 16.04) + DPDK (dpdk-19.08.2.tar).
Add processors and memory to the VM (needed for configuring multi-queue NICs and reserving hugepages).
After editing the network configuration file, restart the networking service for the change to take effect: sudo service networking restart
After boot, check the multi-queue NIC (the NICs here have not been renamed back to the traditional ethX scheme, so the name changes after reboot: ens33 became ens160):
Edit the configuration again so that all of the NICs come up:
All NICs after the change (as in step 2, edit the /etc/network/interfaces file again):
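For reference, a minimal /etc/network/interfaces entry bringing up two NICs via DHCP might look like the following (the second name, ens192, is an assumption for illustration; match the names to your own ifconfig -a output):

```
auto ens160
iface ens160 inet dhcp

# second (hypothetical) NIC name; use the name your system actually shows
auto ens192
iface ens192 inet dhcp
```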
Multi-queue NIC:
default_hugepagesz=1G hugepagesz=2M hugepages=1024 isolcpus=0-2
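After rebooting with those kernel parameters, the reservation can be checked from /proc (HugePages_Total should match the hugepages=1024 configured above; a generic Linux sanity check, not DPDK-specific):

```shell
# Hugepage counters live in /proc/meminfo on any Linux kernel
grep Huge /proc/meminfo
# DPDK also needs hugetlbfs mounted; dpdk-setup.sh can do this, or manually (as root):
#   mkdir -p /mnt/huge && mount -t hugetlbfs nodev /mnt/huge
```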
If the build fails with numa.h missing, install libnuma-dev (sudo apt-get install libnuma-dev).
I used option 39 here, which builds DPDK for 64-bit Linux and generates an x86_64-native-linux-gcc directory.
# export RTE_SDK=/home/hlp/dpdk/dpdk-stable-19.08.2
# export RTE_TARGET=x86_64-native-linux-gcc
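With those two variables exported, the legacy 19.x makefile build (what option 39 of the setup script runs) can also be driven by hand:

```
cd $RTE_SDK
make install T=$RTE_TARGET
```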
Binding fails while the NIC is still up; bring the NIC down first and then bind again. For example, here I brought down the multi-queue NIC ens160: sudo ifconfig ens160 down
The script lists the NICs with a prompt; enter the id shown in front of the NIC you want to bind. Here I have already bound it:
Be sure the NIC is down before running the bind step (option 49); run option 49 again afterwards to confirm the binding succeeded.
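The same bind step can also be done without the setup-script menu, via dpdk-devbind.py (the PCI address 0000:03:00.0 is taken from the EAL log in this article; igb_uio is assumed to be the UIO driver you loaded):

```
# bring the NIC down first, then bind it to the DPDK-compatible driver
sudo ifconfig ens160 down
sudo ./usertools/dpdk-devbind.py --bind=igb_uio 0000:03:00.0
# verify the binding
./usertools/dpdk-devbind.py --status
```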
Enter hex bitmask of cores to execute testpmd app on
Example: to execute app on cores 0 to 7, enter 0xff
# enter 7 here
bitmask: 7
Launching app
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 00:0C:29:EE:2E:64
Checking link statuses...
Done
# from here, testpmd commands can be used to inspect the ports and related information
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 00:0C:29:EE:2E:64
Device name: 0000:03:00.0
Driver name: net_vmxnet3
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off
filter off
qinq(extend) off
Supported RSS offload flow types:
ipv4
ipv4-tcp
ipv6
ipv6-tcp
Minimum size of RX buffer: 1646
Maximum configurable length of RX packet: 16384
Current number of RX queues: 1
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 128
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 8
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 512
TXDs number alignment: 1
Max segment number per packet: 255
Max segment number per MTU/TSO: 16
# quit to exit
testpmd> quit
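The hex bitmask testpmd asks for at launch is just one bit per core; shell arithmetic shows where 7 and 0xff come from:

```shell
# bit N set => run on core N
printf '0x%x\n' $(( (1<<0) | (1<<1) | (1<<2) ))   # cores 0-2 -> 0x7
printf '0x%x\n' $(( (1<<8) - 1 ))                 # cores 0-7 -> 0xff
```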
Each example can be built with make directly in its own directory (RTE_SDK and RTE_TARGET must be exported as above).
root@hlp:/home/hlp/dpdk/dpdk-stable-19.08.2/examples/helloworld# ./build/helloworld -l 0-7 -n 8
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15ad:7b0 net_vmxnet3
hello from core 1
hello from core 2
hello from core 3
hello from core 4
hello from core 5
hello from core 6
hello from core 7
hello from core 0
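The helloworld example itself is only a few lines; the following is a sketch of what examples/helloworld/main.c does with the 19.08-era EAL API (simplified, not the verbatim upstream source):

```c
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_debug.h>

/* Function launched on each worker ("slave" in 19.08 naming) lcore. */
static int
lcore_hello(void *arg)
{
    (void)arg;
    printf("hello from core %u\n", rte_lcore_id());
    return 0;
}

int
main(int argc, char **argv)
{
    /* rte_eal_init() consumes the EAL args (-l 0-7 -n 8 above). */
    if (rte_eal_init(argc, argv) < 0)
        rte_panic("Cannot init EAL\n");

    unsigned lcore_id;
    RTE_LCORE_FOREACH_SLAVE(lcore_id)
        rte_eal_remote_launch(lcore_hello, NULL, lcore_id);

    lcore_hello(NULL);       /* run on the master lcore as well */
    rte_eal_mp_wait_lcore(); /* wait until every lcore returns */
    return 0;
}
```

This matches the log above: the master lcore (core 0) prints alongside the launched workers, and the print order across cores is not guaranteed.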
# kni test: -P enables promiscuous mode, -p 0x3 is the port mask (ports 0 and 1), --config is (port, rx lcore, tx lcore)
./build/kni -l 4-7 -n 4 -- -P -p 0x3 -m --config="(0, 4, 6),(1, 5, 7)"
# l3fwd test: --config is (port, queue, lcore)
./build/l3fwd -l 4-7 -n 4 -- -p 0x3 --config="(0,0,4),(1,0,5)" --parse-ptype
# this errors with "port 1 is not present on the board": the port mask 0x3 asks for ports 0 and 1, but only one NIC is bound to DPDK here
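With only a single bound port, dropping the mask to 0x1 and keeping one (port,queue,lcore) tuple should avoid that error (a sketch derived from the command above, not verified here):

```
./build/l3fwd -l 4-7 -n 4 -- -p 0x1 --config="(0,0,4)" --parse-ptype
```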