
Bandwidth explosion: what power will be needed to supervise tomorrow's networks?

Here's a glimpse into one of NANO Corp's core values, the very foundation upon which we built our company. We're thrilled to share it with you.

Florian Thebault
November 22, 2022

Network supervision: the stakes have never been higher

Network supervision serves two main objectives: cybersecurity and quality-of-service monitoring (both in terms of integrity and performance). One technology capable of covering both is protocol analysis, the art of inspecting network packets. This technology, based on network probes, has existed since the early 2000s, when bandwidth topped out at around 1 Gbit/s, even in demanding environments such as universities or government organizations (excluding data centers and ISPs).

Speeds have increased almost exponentially

In the professional field, customer bandwidth steadily increased from 1 Gbit/s in 2000 to 10 Gbit/s in 2007, and then to 40 Gbit/s in 2010.

For individuals, internet speed climbed from 56 kbit/s to 512 kbit/s, then quickly moved to ADSL (between 1 Mbit/s and 20 Mbit/s) and finally to fiber (from 200 Mbit/s to 1 Gbit/s today, and even 8 Gbit/s in France!).

However, the computing power needed to supervise networks has grown only linearly and has not kept pace with bandwidth, leading to a widening gap.

How has protocol analysis kept up with increasing throughput over the past 20 years?

Multiplying probes might seem like an obvious way to keep up with the explosion in throughput. But guaranteeing correct analysis of all network traffic requires that every packet belonging to the same session be handled by the same processing unit.
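To make that constraint concrete, here is a minimal, hypothetical sketch (not NANO Corp's implementation) of how a probe can preserve session affinity: hash the flow's 5-tuple symmetrically, so that both directions of a TCP or UDP session land on the same worker.

```c
/* Minimal illustration of session affinity: every packet of a given
 * session (same 5-tuple, in either direction) must reach the same worker.
 * Hypothetical sketch, not NANO Corp's actual code.
 * Build: cc -o affinity affinity.c */
#include <stdint.h>
#include <stdio.h>

#define NUM_WORKERS 8

struct five_tuple {
    uint32_t src_ip, dst_ip;      /* IPv4 addresses, host byte order */
    uint16_t src_port, dst_port;
    uint8_t  proto;               /* e.g. 6 = TCP, 17 = UDP */
};

/* Symmetric hash: A->B and B->A give the same value, so both directions
 * of a session are steered to the same worker. */
static uint32_t flow_hash(const struct five_tuple *t)
{
    uint32_t h = (t->src_ip ^ t->dst_ip)
               ^ ((uint32_t)(t->src_port ^ t->dst_port) << 16)
               ^ t->proto;
    return h * 2654435761u;       /* simple multiplicative mixing */
}

static unsigned pick_worker(const struct five_tuple *t)
{
    return flow_hash(t) % NUM_WORKERS;
}

int main(void)
{
    /* Client -> server and server -> client packets of the same TCP session */
    struct five_tuple fwd = { 0x0A000001, 0x0A000002, 44321, 443, 6 };
    struct five_tuple rev = { 0x0A000002, 0x0A000001, 443, 44321, 6 };

    printf("forward direction -> worker %u\n", pick_worker(&fwd));
    printf("reverse direction -> worker %u\n", pick_worker(&rev));
    return 0;
}
```

Real probes typically use stronger hash functions (for example the Toeplitz hash used by NIC receive-side scaling), but the principle is the same: the distribution key is computed per flow, not per packet.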

To guarantee this session affinity, three types of solutions have been developed:

  • Parallelizing processing, which requires developing new parallelization chips (such as ASICs or FPGAs) every time throughput increases significantly, and adapting solutions to the specific code that drives those chips. This makes it impossible to upgrade capabilities without changing hardware.
  • Increasing computing power, which requires developing new chips with each increase in throughput. This, too, makes it impossible to upgrade capabilities without changing hardware.
  • Multiplying probes in the network, which is limited to certain specific cases, is subject to placement constraints, and can lead to loss of information.

For the last two decades, R&D work has mainly focused on the development of hyper-specialized hardware (FPGAs, ASICs, etc.) aimed at increasing probe performance.

Each increase in bandwidth can only be caught up with through major technological innovations that take time to develop, which explains the chronic, almost mechanical lag between “old generation” probes and ever denser, ever faster networks.

As a consequence of this structural lag, the solutions adopted by current manufacturers led to two major drawbacks: either costs exploded due to investment in hardware development for more efficient probes, or the infrastructure footprint exploded due to the multiplication of probes in the network.

Expanding the footprint or increasing the cost of probes to keep up with growing bandwidth has long been the default solution. But that approach is now colliding with the arrival of 100G, and it has become a sore point.

100G networks: the need for a paradigm shift

Today, it is neither credible nor viable to count on sustaining protocol analysis through ever more computing power from custom hardware. Nor is it serious or viable to push network probes far away from network cores. It is therefore necessary to rethink both hardware and software to meet the challenges of tomorrow.

Staying in the race requires a change of approach: building on the parallelization capabilities of off-the-shelf equipment. Network card manufacturers have made parallelization the standard, allowing excellent scalability. As a result, probe performance should no longer rest solely on hardware performance, but also on software performance.
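By way of illustration, here is a minimal sketch (hypothetical, not NANO Corp's product code) of flow-aware parallelization using only commodity Linux features: the AF_PACKET fanout mechanism in PACKET_FANOUT_HASH mode lets the kernel hash each packet's flow and keep every session pinned to one worker thread. It assumes a Linux host and requires root privileges.

```c
/* Minimal sketch of flow-aware parallelization on commodity hardware
 * (Linux AF_PACKET fanout). Each worker thread receives whole sessions,
 * steered by the kernel's per-flow hash. Hypothetical illustration,
 * not NANO Corp's product code.
 * Build: cc -pthread -o fanout fanout.c
 * Run as root: ./fanout eth0 4 */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static const char *ifname;
static int fanout_group;

static void *worker(void *arg)
{
    long id = (long)arg;

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return NULL; }

    /* Bind the socket to the capture interface */
    struct sockaddr_ll ll = {0};
    ll.sll_family   = AF_PACKET;
    ll.sll_protocol = htons(ETH_P_ALL);
    ll.sll_ifindex  = if_nametoindex(ifname);
    if (bind(fd, (struct sockaddr *)&ll, sizeof(ll)) < 0) {
        perror("bind"); close(fd); return NULL;
    }

    /* Join the fanout group in HASH mode: the kernel hashes each packet's
     * flow and always delivers a given session to the same socket/thread. */
    int fanout_arg = (fanout_group & 0xffff) | (PACKET_FANOUT_HASH << 16);
    if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT,
                   &fanout_arg, sizeof(fanout_arg)) < 0) {
        perror("setsockopt(PACKET_FANOUT)"); close(fd); return NULL;
    }

    char buf[65536];
    unsigned long count = 0;
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;
        /* Protocol analysis of the packet would happen here */
        if (++count % 10000 == 0)
            printf("worker %ld: %lu packets\n", id, count);
    }
    close(fd);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <interface> <num_workers>\n", argv[0]);
        return 1;
    }
    ifname = argv[1];
    long nworkers = strtol(argv[2], NULL, 10);
    fanout_group = getpid() & 0xffff;   /* arbitrary fanout group id */

    pthread_t threads[64];
    for (long i = 0; i < nworkers && i < 64; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);
    for (long i = 0; i < nworkers && i < 64; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```

NIC receive-side scaling (RSS) applies the same idea in hardware: a per-flow hash steers whole sessions to per-core receive queues, so capacity can grow by adding cores or machines rather than by designing new chips.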

An approach based on a hardware-independent library offers greater freedom with respect to suppliers and guarantees software quality.

Monitoring and protecting your networks: NANO’s way

Our experience with networks, and our conviction that their speeds will keep growing, from 100 Gbit/s today to 400 Gbit/s tomorrow, have guided us in meeting these challenges.

Years of R&D in the field of parallelization, on both the hardware and software sides, made it possible to develop our core concept: network probes that scale both vertically and horizontally.

Stay tuned for more!

