Setting up your environment
Creating a VM
1. **Download the latest Fedora-Cloud-Base image** - The Fedora download site is http://fedora.inode.at/releases/. If the download link below is no longer available, please refer to this website and pick a newer image:
````
[user@host ~]# wget http://fedora.inode.at/releases/32/Cloud/x86_64/images/Fedora-Cloud-Base-32-1.6.x86_64.qcow2
````
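Optionally, you can sanity-check the downloaded file with `qemu-img` (assuming the qemu-img tool is available on the host) before modifying it:
````
[user@host ~]# qemu-img info Fedora-Cloud-Base-32-1.6.x86_64.qcow2
(the "file format" field of the output should report qcow2)
````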
2. **Prepare the qcow2 image** - Use the virt-sysprep command below to install the related packages and change the root password to your own:
````
[user@host ~]# sudo virt-sysprep --root-password password:changeme --uninstall cloud-init --network --install ethtool,pciutils,kernel-modules-internal --selinux-relabel -a Fedora-Cloud-Base-32-1.6.x86_64.qcow2
````
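virt-sysprep is part of the libguestfs tool set. If the command is missing on your host, install it first; the package name below is an assumption for Fedora-based hosts (on newer releases the tool may be packaged as guestfs-tools instead):
````
[user@host ~]# sudo dnf install libguestfs-tools-c
````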
Compile the kernel
On some Linux distributions the vDPA kernel framework is not enabled by default, so we need to download the Linux source code and compile it ourselves. Below are the steps to compile the Linux kernel (the kernel version should be 5.4 or above).
You can get more information about how to compile Linux kernels from how-to-compile-linux-kernel. You can skip this part if you are already familiar with it.
````
[user@host ~]# git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
[user@host ~]# cd linux
[user@host ~]# git checkout v5.8
[user@host ~]# cp /boot/config-$(uname -r) .config
[user@host ~]# vim .config
(please refer to **Note1** to enable vDPA kernel framework)
[user@host ~]# make -j
[user@host ~]# sudo make modules_install
[user@host ~]# sudo make install
````
Then reboot into the newly installed kernel (a quick way to verify the new kernel and its config is sketched after the notes below).
**Note1**: We need to set the following lines in the **.config** file to enable the vDPA kernel framework:
CONFIG_VIRTIO_VDPA=m
CONFIG_VDPA=y
CONFIG_VDPA_SIM=m
CONFIG_VHOST_VDPA=m
**Note2**: If you prefer to use `make menuconfig`, you can find these options under the `Device Drivers` section, named `vDPA drivers`, `Virtio drivers` and `VHOST drivers` as of kernel 5.8.
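After rebooting into the new kernel, you can quickly check that it is running and that the vDPA options are enabled. This is just a sketch and assumes your distribution keeps the kernel config under /boot:
````
[user@host ~]# uname -r
[user@host ~]# grep -E 'CONFIG_(VDPA|VDPA_SIM|VHOST_VDPA|VIRTIO_VDPA)=' /boot/config-$(uname -r)
````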
Compile and install QEMU
1. **Download the QEMU code** - Support for the vhost-vdpa backend was merged in v5.1.0, so you should use that version or above:
````
[user@host ~]# git clone https://github.com/qemu/qemu.git
````
2. **Compile QEMU** - Follow the commands below (see the README.rst in QEMU for more information):
````
[user@host ~]# cd qemu/
[user@host ~]# git checkout v5.1.0-rc3
[user@host ~]# mkdir build
[user@host ~]# cd build/
[user@host ~]# ../configure --enable-vhost-vdpa --target-list=x86_64-softmmu
[user@host ~]# make
````
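Once the build finishes, you can do a quick sanity check from the build directory. The paths below assume the out-of-tree build layout used above, and the config-host.mak check is only a convenience to confirm that the configure flag was picked up:
````
[user@host ~]# ./x86_64-softmmu/qemu-system-x86_64 --version
[user@host ~]# grep VHOST_VDPA config-host.mak
(it should print CONFIG_VHOST_VDPA=y if the option was enabled)
````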
Configuring the host
1. **Load all the related kernel modules** - You should now load the modules with the following commands (see below for how to make them load automatically at boot):
````
[user@host ~]# modprobe vdpa
[user@host ~]# modprobe vhost_vdpa
[user@host ~]# modprobe vdpa_sim
````
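The modprobe commands only last until the next reboot. If you want the modules loaded automatically at boot, one option on a systemd-based host (a sketch; the file name is arbitrary) is to drop a configuration file into /etc/modules-load.d:
````
[user@host ~]# sudo tee /etc/modules-load.d/vdpa.conf <<'EOF'
vdpa
vhost_vdpa
vdpa_sim
EOF
````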
2. **Verify vhost_vdpa is loaded correctly** - Ensure the bus driver is indeed vhost_vdpa using the following commands:
````
[user@host ~]# ls -l /sys/bus/vdpa/drivers
drwxr-xr-x 2 root root 0 Sep 5 16:54 vhost_vdpa
[user@host ~]# ls -l /sys/bus/vdpa/devices/vdpa0/
total 0
lrwxrwxrwx. 1 root root 0 Sep 21 12:24 driver -> ../../bus/vdpa/drivers/vhost_vdpa
drwxr-xr-x. 2 root root 0 Sep 21 12:25 power
lrwxrwxrwx. 1 root root 0 Sep 21 12:25 subsystem -> ../../bus/vdpa
-rw-r--r--. 1 root root 4096 Sep 21 12:24 uevent
drwxr-xr-x. 3 root root 0 Sep 21 12:25 vhost-vdpa-0
[user@host ~]# ls -l /dev/ |grep vdpa
crw------- 1 root root 239, 0 Sep 5 16:54 vhost-vdpa-0
````
3. **Set ulimit -l to unlimited** - `ulimit -l` controls the maximum amount of memory that may be locked. In this case we need to set it to **unlimited**, since vhost-vDPA needs to lock pages to make sure the hardware DMA works correctly:
````
[user@host ~]# ulimit -l unlimited
````
If you forget to set this, you may get error messages from QEMU like the following:
````
qemu-system-x86_64: failed to write, fd=12, errno=14 (Bad address)
qemu-system-x86_64: vhost vdpa map fail!
qemu-system-x86_64: vhost-vdpa: DMA mapping failed, unable to continue
````
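Note that `ulimit -l unlimited` only affects the current shell session. To make the limit persistent across logins (a sketch, assuming a pam_limits-based setup; adjust the user name to the one launching QEMU), you can append entries to /etc/security/limits.conf:
````
[user@host ~]# sudo tee -a /etc/security/limits.conf <<'EOF'
root    soft    memlock    unlimited
root    hard    memlock    unlimited
EOF
````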
4. **Launch the guest VM** - The device **/dev/vhost-vdpa-0** is the vDPA device we can use. The following is a simple example of using QEMU to launch a VM with vhost_vdpa:
````
[user@host ~]# sudo x86_64-softmmu/qemu-system-x86_64 \
-hda Fedora-Cloud-Base-32-1.6.x86_64.qcow2 \
-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa1 \
-device virtio-net-pci,netdev=vhost-vdpa1,mac=00:e8:ca:33:ba:05,\
disable-modern=off,page-per-vq=on \
-enable-kvm \
-nographic \
-m 2G \
-cpu host \
2>&1 | tee vm.log
````
5. **Verify the port was created successfully** - After the guest boots up, we can log in to it. We should verify that the port has been bound successfully to the virtio driver. This matters because the backend of the virtio_net driver is vhost_vdpa, and if something went wrong the device will not be created.
````
guest login: root
Password:
Last login: Tue Sep 29 12:03:03 on ttyS0
[root@guest ~]# ip -s link show eth0
2: eth0:
link/ether fa:46:73:e7:7d:78 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
12006996 200043 0 0 0 0
TX: bytes packets errors dropped carrier collsns
12006996 200043 0 0 0 0
[root@guest ~]# ethtool -i eth0
driver: virtio_net
version: 1.0.0
firmware-version:
expansion-rom-version:
bus-info: 0000:00:04.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
[root@guest ~]# lspci | grep 00:04.0
00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device
````
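You can also check which kernel driver is bound to the PCI device directly with `lspci -k` (the exact output format may differ slightly between distributions):
````
[root@guest ~]# lspci -k -s 00:04.0
00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device
        Kernel driver in use: virtio-pci
````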
Running traffic with vhost_vdpa
Now that we have a vdpa_sim port with the loopback function, we can generate traffic using **pktgen** to verify it. Pktgen is a packet generator integrated into the Linux kernel. For more information please refer to the pktgen documentation and sample test scripts.
````
[root@guest ~]# ip -s link show eth0
2: eth0:
link/ether fa:46:73:e7:7d:78 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
18013078 300081 0 0 0 0
TX: bytes packets errors dropped carrier collsns
18013078 300081 0 0 0 0
[root@guest ~]# ./pktgen_sample01_simple.sh -i eth0 -m 00:e8:ca:33:ba:05
(you can get this script from the pktgen sample test scripts under samples/pktgen in the kernel source)
[root@guest ~]# ip -s link show eth0
2: eth0:
link/ether fa:46:73:e7:7d:78 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
24013078 400081 0 0 0 0
TX: bytes packets errors dropped carrier collsns
24013078 400081 0 0 0 0
````
You can see that the **RX packets** increase together with the **TX packets**, which means vdpa_sim is working as expected.
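If you prefer to drive pktgen by hand instead of using the sample script, the same test can be reproduced through the pktgen /proc interface. The sketch below uses the first pktgen thread (kpktgend_0) and reuses the interface and destination MAC from the example above; the destination IP is an arbitrary benchmark address, since vdpa_sim simply loops the packets back:
````
[root@guest ~]# modprobe pktgen
[root@guest ~]# echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
[root@guest ~]# echo "add_device eth0" > /proc/net/pktgen/kpktgend_0
[root@guest ~]# echo "count 100000" > /proc/net/pktgen/eth0
[root@guest ~]# echo "pkt_size 60" > /proc/net/pktgen/eth0
[root@guest ~]# echo "dst 198.18.0.42" > /proc/net/pktgen/eth0
[root@guest ~]# echo "dst_mac 00:e8:ca:33:ba:05" > /proc/net/pktgen/eth0
[root@guest ~]# echo "start" > /proc/net/pktgen/pgctrl
````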
Use Case B: Experimenting with virtio_vdpa bus driver
Overview of the datapath
The virtio_vdpa driver is a transport implementation for kernel virtio drivers on top of the vDPA bus operations. Virtio_vdpa creates a virtio device on the virtio bus. For more information on the virtio_vdpa bus, see here.
The following diagram shows the datapath for virtio-vDPA using a vdpa_sim (the vDPA simulator device):