
Some of the limitations we hit with DPDK

Discover why we opted for an alternative approach in developing our network probe technology, focusing on comprehensive observability across both physical and cloud networks.

Florian Thebault
October 11, 2022

Some of our reservations about the DPDK

Our first set of Rust drivers, for Intel 1G, 10G and 40G network cards, is finished (we'll write more about it soon, but our benchmarks are looking brilliant!). They run in userspace, just like our probe!
A question we were asked quite often during the last FIC (2022) was:

"Why didn't you use the DPDK?",

In other words:

"Apart from a masochistic side, is there a valid reason?

When Intel, the inventor of DPDK, chose us as one of the 10 startups to join its Intel Ignite accelerator program in 2022, we were asked the same question over and over again.
It is now time to lay out the reservations we have about DPDK in our context, and the reasons that led us to choose another path.

The answer to this question could be summed up as: "we do everything in Rust, and we are so used to suffering that developing our own NIC drivers was just another walk in the park... on fire." So, yes, there is definitely an attraction to pain.

On a more serious note, the reality is that when we developed our network probe, aiming for 100 Gbit/s, we found that DPDK caused many problems, especially since our goal was to analyze networks for cybersecurity purposes.

DPDK (the Data Plane Development Kit) is a rather revolutionary open-source technology, initially developed by Intel for Linux environments and later opened up to the community. Its objective is to provide data plane libraries and drivers for managing network cards.

For those not familiar with DPDK, the technology appeared in 2010 to shake the dust off the existing Linux network stack, which was aging and really not very efficient at high packet rates.

How does DPDK work?

At startup, a piece of networking software asks the kernel to set up a packet acquisition method. Several technologies exist, the most popular certainly being AF_PACKET (native to Linux), AF_XDP, DPDK and Netmap.
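For readers who have never looked at this layer, here is a minimal sketch of the simplest of these acquisition paths: an AF_PACKET raw socket read through the libc crate in Rust (illustrative only, not our probe code, and it assumes Linux plus root or CAP_NET_RAW). This kernel-mediated, one-syscall-per-packet loop is exactly what DPDK, AF_XDP and Netmap set out to improve on.

```rust
// Minimal AF_PACKET capture sketch (requires the `libc` crate; Linux only,
// run as root or with CAP_NET_RAW). Illustrative only: this is the "native
// Linux" acquisition path that kernel-bypass frameworks aim to outperform.
use std::io;

fn main() -> io::Result<()> {
    // ETH_P_ALL ("give me every protocol") must be passed in network byte order.
    let proto = (libc::ETH_P_ALL as u16).to_be() as i32;
    let fd = unsafe { libc::socket(libc::AF_PACKET, libc::SOCK_RAW, proto) };
    if fd < 0 {
        return Err(io::Error::last_os_error());
    }

    let mut buf = [0u8; 2048];
    loop {
        // One syscall (and one copy) per packet: this per-packet kernel
        // crossing is the main cost that DPDK and friends remove.
        let len = unsafe {
            libc::recv(fd, buf.as_mut_ptr() as *mut libc::c_void, buf.len(), 0)
        };
        if len < 0 {
            return Err(io::Error::last_os_error());
        }
        println!("captured {} bytes", len);
    }
}
```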

With DPDK, the kernel is bypassed entirely by default and a different driver, running in userspace, takes charge of the network card. This has allowed DPDK to quickly establish itself as a replacement for the traditional Linux kernel networking stack.

DPDK uses several strategies to reach maximum performance. Allocating dedicated CPU cores to the "network application" guarantees more consistent performance (in terms of millions of packets processed per second). "Hugepages" provide more uniform access times to the memory pool, which reduces the number of TLB (translation lookaside buffer) misses. There are of course many other optimization techniques: retrieving packets in batches, for example, or the kernel bypass itself, which also cuts down on copy operations at the buffer level.
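To make the hugepages point concrete, here is a small illustrative sketch (not DPDK code; it assumes Linux, the libc crate, and that some 2 MB huge pages have already been reserved, e.g. through /proc/sys/vm/nr_hugepages). It maps a buffer backed by a huge page, which is essentially the mechanism DPDK's memory pools are built on.

```rust
// Illustrative sketch of hugepage-backed memory, the mechanism behind
// DPDK's memory pools (requires the `libc` crate; Linux only, and the
// system must have huge pages reserved beforehand).
use std::io;
use std::ptr;

const HUGE_PAGE_SIZE: usize = 2 * 1024 * 1024; // one 2 MB huge page

fn main() -> io::Result<()> {
    let addr = unsafe {
        libc::mmap(
            ptr::null_mut(),
            HUGE_PAGE_SIZE,
            libc::PROT_READ | libc::PROT_WRITE,
            libc::MAP_PRIVATE | libc::MAP_ANONYMOUS | libc::MAP_HUGETLB,
            -1,
            0,
        )
    };
    if addr == libc::MAP_FAILED {
        return Err(io::Error::last_os_error());
    }

    // A single TLB entry now covers 2 MB of packet buffers instead of 4 KB,
    // which is where the reduction in TLB misses comes from.
    println!("huge page mapped at {:p}", addr);

    unsafe { libc::munmap(addr, HUGE_PAGE_SIZE) };
    Ok(())
}
```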

In general, the reduced load on the processor yields very interesting performance. And as Intel points out:

"The DPDK removes some of the bottlenecks involved in packet processing. By moving memory buffers into user space and performing polling rather than interrupt-based processing, the DPDK is able to improve performance with small packet sizes." R-IOV for NFV Solutions - Practical Considerations and Thoughts Technical Brief - Authors: Patrick Kutch Cloud Solutions Architect Solutions Enabling Team, Intel Brian Johnson Solutions Architect Networking Division, Intel).

Today, in a hypervisor environment, the combination of DPDK, Open vSwitch (OVS) and virtual function management makes several interesting optimizations possible, especially for inter-virtual-machine packet transmission (i.e. between VMs on a single physical machine).

So why do you punish yourselves, you might ask?

During our experiments, we noticed that DPDK also has certain limitations that heavily impact the performance of solutions based on protocol analysis. Among these limitations, we identified four main ones that made us decide not to use this technology for our solution:

  • compiling and maintenance challenges,
  • configuration and deployment difficulties,
  • no Rust compatibility (to date),
  • a framework that imposes its own threads.

The last point means that DPDK cannot be used as just another packet-reading backend: a solution built on DPDK can only use DPDK, whereas we wanted broader compatibility.
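To show what we mean by broader compatibility, here is a simplified, hypothetical sketch; the names are invented for this post and are not taken from our probe. The idea is that every acquisition method sits behind the same small trait, so the analysis pipeline never needs to know which backend produced a packet. DPDK's model, where the framework owns the threads and the main loop, does not slot behind an interface like this.

```rust
// Hypothetical sketch of a pluggable capture abstraction (names invented
// for this post, not our actual probe code). Each acquisition method hides
// behind the same trait, so the analysis pipeline stays backend-agnostic.
use std::io;

/// A raw captured frame and the moment it was seen.
struct Packet {
    data: Vec<u8>,
    timestamp_ns: u64,
}

/// Any packet source the probe can read from: AF_PACKET, AF_XDP,
/// our own userspace NIC drivers... DPDK does not fit here, because
/// it insists on owning the threads and the main loop itself.
trait CaptureBackend {
    fn next_packet(&mut self) -> io::Result<Packet>;
}

/// Stand-in backend that fabricates frames so the sketch actually runs.
struct MockBackend {
    counter: u64,
}

impl CaptureBackend for MockBackend {
    fn next_packet(&mut self) -> io::Result<Packet> {
        self.counter += 1;
        Ok(Packet {
            data: vec![0u8; 64],
            timestamp_ns: self.counter,
        })
    }
}

/// The probe owns its own threads and simply pulls packets from
/// whichever backend it was handed.
fn run_probe(backend: &mut dyn CaptureBackend, max_packets: u64) -> io::Result<()> {
    for _ in 0..max_packets {
        let pkt = backend.next_packet()?;
        println!("got {} bytes at t={}", pkt.data.len(), pkt.timestamp_ns);
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut backend = MockBackend { counter: 0 };
    run_probe(&mut backend, 3)
}
```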

These are not the only constraints or limitations we encountered, although they were the decisive ones in our decision to abandon this technology. Among the other points that irritated us, we could mention the following examples.

DPDK is a complex framework to maintain because of its legacy code, which makes it even more difficult to adapt to complex network environments.

When DPDK is updated, compatibility with previously supported hardware is sometimes lost.

Implementation at the network interface card level is complex.

DPDK manages its own threads, outside the kernel, and therefore has its own scheduler. And since it is initialized at boot time, no changes can be made without rebooting the whole system, which limits the dynamic scalability of any solution built on DPDK.

In our research and development work, we have never been able to sustain 100 Gbit/s of throughput with small, 64-byte packets using DPDK, despite the community's announcements of this level of performance, and we tried on several different network cards. (With the preamble and inter-frame gap included, a 64-byte frame occupies 84 bytes on the wire, so 100 Gbit/s corresponds to roughly 148,800,000 packets per second.)

The network card drivers implemented in DPDK have their own threads (pinned to CPU cores), which, depending on the use case, can lead to callback schemes that impose a significant overhead on processor resources when recording packets to PCAP.

And finally, we have noticed that some DPDK-based solutions hit a plateau once they have to use more than 30 cores.

We'd like to stress that these points may evolve: DPDK is regularly updated, and some of them may have been resolved since we last tried a DPDK implementation.

NANO Corp's goal is unified observability, i.e. the ability to monitor both physical and cloud networks with the same platform. The development of our cloud probes (spoiler alert: big announcement coming soon) required us to anticipate the potential impacts of DPDK in these environments.

Although we did not experience this ourselves, owing to other development choices, it has also been noted that in virtualized environments DPDK may have difficulty scaling packet handling. As stated here:

"OVS-DPDK performs very similarly in our test environments, scaling linearly for initial VNFs, but then reaching a plateau in system throughput as more VNFs are added. We show that this plateau is a function of the CPU resources allocated to the virtual switching functions of OVS-DPDK." Pitaev et al., 2018 Characterizing the Performance of Concurrent Virtualized Network Functions with OVS-DPDK, FD(.)IO VPP and SR-IOV.

We were therefore concerned that the more virtual machines the hypervisor had to manage, the faster the protocol analysis solution would saturate. This hypothesis was confirmed during our research, and it was a saturation we saw no point in trying to work around until the technology had evolved.

It is in light of all these reservations that we decided to bypass DPDK and develop a solution that runs entirely in userspace, NIC drivers included. An article on the subject is coming soon.

If you have run into similar problems, or if you do not share this analysis, our researchers and in-house developers will be happy to compare notes with you.

Please reach out, we’d be happy to discuss!
