Optimizing Virtio Performance in DPDK

First things first: what exactly is going on here? Virtio is a paravirtualized I/O standard that lets a guest operating system communicate with its hypervisor far more efficiently than fully emulated hardware. Instead of the hypervisor mimicking a real NIC register by register, the guest loads lightweight virtio drivers that exchange buffers with the host through shared-memory rings called virtqueues, eliminating most of the emulation overhead.
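You can see this for yourself from inside any guest. A quick check, assuming the interface is named eth0 (substitute your own):

# "virtio_net" here means the paravirtualized path is in use, while
# "e1000" or "rtl8139" would mean the hypervisor is fully emulating a NIC.
ethtool -i eth0 | grep driver

# The adapter should also show up as a Virtio device on the PCI bus.
lspci | grep -i virtio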

But, as you may have guessed, there’s always room for improvement! That’s where DPDK, the Data Plane Development Kit, comes in: a set of userspace libraries and poll-mode drivers that bypass the kernel network stack entirely, pushing Virtio performance further by cutting per-packet CPU overhead and increasing throughput.

So how do we go about doing this? Well, let me show you an example!

First, prepare the kernel for DPDK. It doesn’t require a custom kernel build, but it does need hugepages for its packet buffer pools and, if you want to use the `vfio-pci` driver, an enabled IOMMU. Both are configured through kernel boot parameters:

#!/bin/bash

# This script configures the kernel boot parameters DPDK needs.

# DPDK allocates its packet buffers from hugepages, so reserve them at boot.
# "default_hugepagesz=1G hugepagesz=1G hugepages=4" reserves four 1GB pages;
# on hardware without 1GB page support, use 2MB pages instead, e.g.
# "hugepagesz=2M hugepages=1024".
#
# "intel_iommu=on iommu=pt" enables the IOMMU in passthrough mode, which
# the vfio-pci driver needs to safely hand a NIC to a userspace process
# (use "amd_iommu=on" on AMD systems).
#
# "grubby --update-kernel ALL" applies the arguments to every installed kernel.
grubby --update-kernel ALL \
    --args="default_hugepagesz=1G hugepagesz=1G hugepages=4 intel_iommu=on iommu=pt"

# Reboot afterwards for the new parameters to take effect.
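After rebooting, confirm that the hugepages were actually reserved before going any further:

# HugePages_Total should match the count requested on the kernel command line.
grep Huge /proc/meminfo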

Next, install DPDK and its helper tools:

# Install the DPDK runtime libraries and the dpdk-tools package, which
# provides helper scripts such as dpdk-devbind.py for binding NICs to
# DPDK-capable drivers.
yum install -y dpdk dpdk-tools

The poll-mode drivers for common NICs (e1000, i40e, virtio, and friends) ship as part of the DPDK libraries themselves, so there are no separate per-driver packages to install.
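A quick sanity check that everything landed (the package names above assume a RHEL/CentOS-style repository):

# Confirm both packages are installed and see which DPDK version you got.
rpm -q dpdk dpdk-tools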

Now that we have everything set up, let’s create a virtual machine with a Virtio network interface using the `virt-install` command. We’ll give it 2 vCPUs and 512MB of memory:

# This command creates a virtual machine named "dpdk_vm" with 512MB of
# memory and 2 vCPUs, running the Linux RHEL 7.9 OS variant. Its network
# interface attaches to the host bridge "br0" using the Virtio model, and
# its disk uses the image at /path/to/image.qcow2 (created at 40GB if it
# does not already exist). One gotcha: comments cannot follow the
# backslash line continuations, so all the options are documented here.

virt-install --name dpdk_vm \
    --memory 512 \
    --vcpus=2 \
    --os-variant=rhel7.9 \
    --network bridge=br0,model=virtio \
    --disk path=/path/to/image.qcow2,size=40
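Once the guest is defined, it’s worth double-checking that the NIC really came out as Virtio. One quick way from the host, using the VM name above:

# The interface element of the domain XML should report model type="virtio".
virsh dumpxml dpdk_vm | grep -A 3 '<interface'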

Once our virtual machine is up and running, let’s start optimizing!

First, we need to configure DPDK on both the host and guest machines. On the host, that means loading a userspace I/O driver and handing the NIC over to it:

# Load the vfio-pci driver, which lets a userspace process like DPDK take
# control of a PCI device (this relies on the IOMMU parameters set earlier).
modprobe -v vfio-pci

# List network devices and the drivers they are currently bound to.
dpdk-devbind.py --status

# Unbind the NIC from its kernel driver and bind it to vfio-pci.
# 0000:02:00.1 is an example PCI address; substitute the one reported by
# --status. Once DPDK owns the device, the kernel no longer sees it (no
# more "eth1"), so pick a NIC you are not using for management traffic.
dpdk-devbind.py --bind=vfio-pci 0000:02:00.1

On the guest machine, the Virtio NIC is just another PCI device, so the idea is the same: reserve some hugepages for DPDK, load a userspace I/O driver, and bind the device to it. Since a typical guest has no IOMMU, the simpler `uio_pci_generic` driver is a good fit here:

# Reserve 2MB hugepages inside the guest for DPDK's memory pools
# (64 pages = 128MB, leaving headroom in our 512MB guest).
sysctl -w vm.nr_hugepages=64

# Load the generic userspace I/O driver; unlike vfio-pci, it does not
# require an IOMMU, which a typical guest lacks.
modprobe -v uio_pci_generic

# Find the Virtio network device's PCI address...
dpdk-devbind.py --status

# ...and bind it to the userspace driver. 0000:00:05.0 is an example
# address; Virtio devices in a guest usually sit on PCI bus 0000:00.
dpdk-devbind.py --bind=uio_pci_generic 0000:00:05.0

Now that we have DPDK configured on both machines, let’s test our network performance!

First, let’s run a simple iperf3 benchmark between the host and guest machines. One note: iperf3 travels through the normal kernel network stack, so run it over an interface the kernel still owns (not one bound to DPDK); it measures the underlying Virtio path:

#!/bin/bash
# This script runs a simple iperf3 benchmark between the host and guest machines.
#   -c <guest_ip>  run in client mode against the guest's IP address
#   -t 10          run the test for 10 seconds
#   -p 5201        use port 5201 (iperf3's default)
# Save it as iperf_test.sh and run it with: bash iperf_test.sh
# The output shows the measured throughput between the host and guest machines.

iperf3 -c <guest_ip> -t 10 -p 5201

And on the guest machine:

# Start an iperf3 server listening on port 5201; it accepts the incoming
# connection from the host and reports the measured throughput.
iperf3 -s -p 5201

You should see some solid numbers already! But if you’re like me and want to push it even further, let’s enlarge the network interface’s RX and TX ring buffers so it can soak up traffic bursts (we’ll turn on RSS, Receive Side Scaling, right after):

# Grow the receive and transmit descriptor rings to 256 entries each.
# The -G flag sets the ring parameters, and eth0 is the interface name
# (substitute your own; ethtool takes an interface name, not a driver
# name like "e1000"). Larger rings absorb bursts at the cost of slightly
# higher latency; check the hardware maximums first with "ethtool -g eth0".
ethtool -G eth0 rx 256 tx 256
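To actually enable RSS so receive processing spreads across multiple queues (and therefore multiple vCPUs), bump the interface’s channel count as well. A minimal sketch, assuming the Virtio NIC was created with multi-queue support:

# Spread flows across 2 combined RX/TX queue pairs, matching our 2 vCPUs.
ethtool -L eth0 combined 2

# Confirm the new channel configuration took effect.
ethtool -l eth0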

And that’s it! You should see an even bigger improvement in your Virtio performance.

And the best part? It’s totally worth it for those of us who need our virtual machines to run like lightning!
