Installing and Deploying Intel DPDK in a VMware Virtual Machine

Disclaimer: this document is for learning and exchange only; please do not use it for commercial purposes.

author:朝阳_tony
E-mail : [email protected]
Create Date: 2013-12-28 23:38:47 Saturday
Last Change: 2014-1-1 22:33:42 Wednesday

Please credit the source when reposting: http://blog.csdn.net/linzhaolover

Everyone is welcome to join the Intel DPDK discussion QQ group to learn from each other; group number: 289784125


Please read this document alongside the Intel DPDK source code. It is based on the dpdk-1.5.1 release, which you can download from http://dpdk.org/dev; more official documentation is available at http://dpdk.org/

If you have no Intel NIC and no suitable Linux system, but just want to get a basic feel for DPDK, you can deploy a simple DPDK environment in VMware.


1. Install and configure a VMware virtual machine suitable for running DPDK

1) Virtual machine configuration requirements

vcpu = 2   At least two vCPUs: DPDK pins threads to cores and cannot run properly on just one. If your host can spare them, configure a few more.

memory = 1024   That is, 1 GB; more is better, since hugepages have to be reserved out of it, so allocate generously.

OS: I installed RHEL 6.1. You may pick a newer release, but not an older one, in case it is unsupported. Download links for RHEL 6.3/6.4/6.5 are at http://blog.csdn.net/linzhaolover/article/details/8223568

After the OS is installed, update the kernel. My VM currently runs linux-3.3.2; a kernel between 3.0 and 3.8 is the safest choice, since people have run DPDK successfully on kernels in that range.

NICs: give it two (the reason is explained below).

I won't go into installing the guest OS in VMware here; there are plenty of tutorials online.

2) Add NICs that DPDK supports

Don't be stingy with virtual NICs: add at least two Intel NICs, because a single one triggers an error later (see the testpmd warning below).

DPDK comes from Intel and currently seems to support only Intel NICs. Once the VM is installed, check what NICs it has with lspci:

# lspci  | grep Ethernet
02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:05.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
VMware gives a new VM AMD NICs by default, so how do we add an Intel NIC properly?

First, shut the VM down.

Then add another NIC, but don't start the VM yet; we still need to edit the VM's configuration file.

My configuration file is at E:\Users\adm\Documents\Virtual Machines\Red Hat Enterprise Linux 5\Red Hat 6.vmx

You chose the VM's working directory when you created it; hover the mouse over the VM's name in VMware's left-hand panel and its working directory will be shown.

Open the .vmx file in a text editor and add one line:

ethernet2.virtualDev = "e1000"
Since this is the third NIC I've added, counting from 0 it is number 2 (eth2 in the guest). After the edit, the relevant lines look like this:

ethernet2.virtualDev = "e1000"
ethernet2.present = "TRUE"


e1000 is VMware's emulation of an Intel gigabit NIC (it appears in the guest as an 82545EM).
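
Since testpmd will later want an even number of DPDK ports, it's worth adding a second Intel NIC the same way while you're at it. A sketch following the same pattern (the index 3 is my assumption; use the next free ethernetN in your own .vmx):

ethernet3.virtualDev = "e1000"
ethernet3.present = "TRUE"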

Now restart the VM and check the NICs again; an 82545EM NIC has appeared:

# lspci  | grep Ethernet
02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:05.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
02:06.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)

2. Deploy DPDK

1) Download the source

After booting the VM, clone the latest code from the DPDK site:

git clone git://dpdk.org/dpdk

2) Set environment variables

Enter the dpdk directory, put the environment variables in a file, then source it:

export RTE_SDK=`pwd`
#export RTE_TARGET=x86_64-default-linuxapp-gcc
export RTE_TARGET=i686-default-linuxapp-gcc

Since my VM is 32-bit, I chose i686 and commented out the x86_64 line.

I put the three lines above in a file named dpdkrc, then enabled them with source:

source dpdkrc

Note: whenever you open a new terminal and come back to this directory, you must source this file again before DPDK programs will run properly.

3) Run DPDK with its setup script

Run the setup script to build and exercise DPDK:

 ./tools/setup.sh

----------------------------------------------------------
 Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-default-linuxapp-gcc
[2] i686-default-linuxapp-icc
[3] x86_64-default-linuxapp-gcc
[4] x86_64-default-linuxapp-icc

Choose 1.

Mine is a 32-bit system, so I picked 1 to compile the 32-bit source with gcc; if your VM is 64-bit, choose 3 instead.
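
For reference, option 1 just drives the standard DPDK build; done by hand from the dpdk source root it is roughly (a sketch, using the same target name as above):

make install T=i686-default-linuxapp-gcc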

----------------------------------------------------------
 Step 2: Setup linuxapp environment
----------------------------------------------------------
[5] Insert IGB UIO module
[6] Insert KNI module
[7] Setup hugepage mappings for non-NUMA systems
[8] Setup hugepage mappings for NUMA systems
[9] Display current Ethernet device settings
[10] Bind Ethernet device to IGB UIO module

Once the build completes, choose 5.

This installs the igb_uio.ko driver, which the build placed under the i686-default-linuxapp-gcc/kmod/ directory. Before installing igb_uio.ko, the script first loads the uio module: uio is a mechanism for implementing drivers in user space, and parts of DPDK are built on top of it. If you're interested, http://blog.csdn.net/wenwuge_topsec/article/details/9628409 introduces how uio drivers are used.
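
What option 5 does is roughly equivalent to the following two commands (a sketch, assuming the i686 target built above; run as root from the dpdk source root):

modprobe uio
insmod i686-default-linuxapp-gcc/kmod/igb_uio.ko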


Choose 7 to set up hugepages:

Removing currently reserved hugepages
.echo_tmp: line 2: /sys/devices/system/node/node?/hugepages/hugepages-2048kB/nr_hugepages: No such file or directory
Unmounting /mnt/huge and removing directory

  Input the number of 2MB pages
  Example: to have 128MB of hugepages available, enter '64' to
  reserve 64 * 2MB pages
Number of pages: 64
Reserving hugepages
Creating /mnt/huge and mounting as hugetlbfs

It complains that the nr_hugepages file doesn't exist. I ignored it at the time without knowing the cause; most likely the cleanup step probes the per-NUMA-node path (/sys/devices/system/node/node*/hugepages/...), which doesn't exist on a non-NUMA VM, so the message is harmless.

When it asks how many 2 MB pages to reserve, I entered 64: 64 × 2 MB = 128 MB, which is enough for a simple test.
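
Behind the scenes, option 7 performs roughly the standard hugetlbfs setup (a sketch for a non-NUMA system; run as root):

echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge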


Choose 9 to view your current devices:

Option: 9


Network devices using IGB_UIO driver
====================================
<none>

Network devices using kernel driver
===================================
0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio

Other network devices
=====================
<none>

I have three virtual NICs, and only the last one is Intel. Notice that its current driver is still e1000, with igb_uio listed only under unused=; the next step is to bind it.


Choose 10 to bind the NIC:

Option: 10


Network devices using IGB_UIO driver
====================================
<none>

Network devices using kernel driver
===================================
0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio

Other network devices
=====================
<none>

Enter PCI address of device to bind to IGB UIO driver: 02:06.0
OK
When it asks for the PCI address, enter just the part of 0000:02:06.0 after the leading 0000:, i.e. 02:06.0, punctuation included.

Note that during binding you may get an error like this:

Enter PCI address of device to bind to IGB UIO driver: 02:06.0 02:07.0
Routing table indicates that interface 0000:02:06.0 is active. Not modifying
OK
is active means the NIC in question is probably still up, so you need to bring it down first.

For example, my NIC is eth2:

ifconfig eth2 down
Once it's down, run the bind step above again.
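
For reference, the bind the script performs can also be done by hand through sysfs. A sketch (8086 100f is my assumption for the 82545EM's vendor/device ID; confirm yours with lspci -n first):

echo -n "0000:02:06.0" > /sys/bus/pci/drivers/e1000/unbind
echo -n "8086 100f" > /sys/bus/pci/drivers/igb_uio/new_id

Writing to new_id makes igb_uio claim any unbound device with that ID, so an explicit bind write is usually not needed afterwards.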

Choose 9 again to check the current NIC status:

Option: 9


Network devices using IGB_UIO driver
====================================
0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000

Network devices using kernel driver
===================================
0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*

Other network devices
=====================
<none>
See drv=igb_uio: the device has been rebound, and the Intel NIC's driver is now igb_uio.


Choose 12 to run testpmd and give DPDK a quick test:

Option: 12


  Enter hex bitmask of cores to execute testpmd app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0x3
Since my VM has only 2 CPUs, the hex core mask is 0x3 (binary 11, i.e. cores 0 and 1). Press Enter and let it run.
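
For reference, option 12 launches testpmd roughly like this (a sketch; the binary path and the -n memory-channel count are my assumptions, adjust them to your build):

./i686-default-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i

Here -c 0x3 is the EAL core mask, -n sets the number of memory channels, and everything after -- goes to testpmd itself; -i selects interactive mode.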


Then type start to begin forwarding packets:

Interactive-mode selected
Configuring Port 0 (socket -1)
Checking link statuses...
Port 0 Link Up - speed 1000 Mbps - full-duplex
Done
testpmd>
testpmd> start

Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained

  io packet forwarding - CRC stripping disabled - packets/burst=16
  nb forwarding cores=1 - nb forwarding ports=1
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=8 hthresh=8 wthresh=4
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=36 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0

There's a warning; what does it mean?

2014-01-01 22:32:10 Wednesday

The warning above was resolved thanks to frank, a fellow DPDK learner: it appears because I had added only one Intel NIC. Add a second one and it goes away. (As the message itself says, you could instead relaunch with --port-topology=chained.)
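
If you would rather keep a single port, relaunching with the chained topology the warning suggests would look roughly like this (same assumptions as the earlier testpmd sketch):

./i686-default-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --port-topology=chained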

Type stop to halt forwarding:

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 0             TX-total: 0
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Why are there no packets? If anyone knows, please tell me.

2014-01-01 22:38:16 Wednesday

The no-packets problem above is solved, O(∩_∩)O~. The cause: I had added the NICs in NAT mode, and after switching them to Host-only mode the traffic appeared. I still have a question, though: what is the key difference between these two modes?

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 8890           RX-dropped: 0             RX-total: 8890
  TX-packets: 8894           TX-dropped: 0             TX-total: 8894
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 8895           RX-dropped: 0             RX-total: 8895
  TX-packets: 8889           TX-dropped: 0             TX-total: 8889
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 17785          RX-dropped: 0             RX-total: 17785
  TX-packets: 17783          TX-dropped: 0             TX-total: 17783
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



