Network performance challenges are increasing, and visibility will become an even more important part of the network infrastructure. Is your network visibility ready for what's next? New trends and growing security threats in network operations require a high level of visibility to extract more information from the network. A recent development is the move towards self-organizing and self-healing networks.
The major difference is that, unlike today, network visibility will no longer be a separate part of the network but a service running on top of the network infrastructure. This transition will take time, because the service approach only works on programmable networks (SDN), which existing legacy networks do not support.
The question most people ask these days is whether visibility as a service will be purely software based. The answer is that although the future of visibility will be software driven, hardware is still needed: to handle large volumes of traffic, the software must run on capable hardware. Visibility is therefore software based but runs on standard hardware. Some services can run on standard L4 hardware, but several applications cannot be supported by L4 silicon.
To keep up with this market trend, Cubro has invested in new, advanced platforms that outperform standard L4 silicon. We see our future products as part of the network infrastructure: they will offer advanced functions, up to L7, for the visibility service, while switching and steering will be handled by the programmable network.
This is why Cubro now supports the SONiC API on all new G5 units. This makes it possible to integrate the units into existing networks and use them as visibility platforms.
Cubro G5: The next generation platform
(EXA32100 & EXA48600 and upcoming products)
The Cubro G5 is the most advanced packet broker platform currently available on the market. The key features of the Cubro G5 platform include:
- High performance switching fabric (L2 - L7)
- High-performance host controller with SSD connection
- Open Unix operating system
- Available as OEM platform
High-performance host controller
The design of this packet broker appliance consists of two major parts: the switching chip and the host controller. The host controller runs the web application, the CLI, and the control functions of the switching chip. Host-controller performance matters for higher-layer processing, especially where support from a general-purpose CPU is needed.
Highlights include an ARM multi-core CPU, 64 GB of memory, and an M.2 SSD/HDD of up to 2 TB, which can also be used to capture and store traffic.
Linux Ubuntu OS support
The unit runs Ubuntu Linux, so many applications such as capture tools, analysis tools, and traffic generators can run on it out of the box.
The G5 platform supports user-definable match keys and actions. Because the data plane is programmable in P4, this provides a major advantage for future expansion: when a new feature is requested, P4 allows us to add it within weeks, which is often not possible with a fixed-function ASIC. For example, removing up to 4 MPLS tags in one action is not supported by any ASIC switch.
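To illustrate what "removing up to 4 MPLS tags in one action" means, here is a minimal software model of the operation in Python. This is a hypothetical sketch for explanation only, not Cubro firmware code; it assumes a plain Ethernet frame and standard MPLS label-stack encoding (RFC 3032).

```python
import struct

ETHERTYPE_MPLS = 0x8847  # MPLS unicast

def pop_mpls_tags(frame: bytes, max_pop: int = 4) -> bytes:
    """Pop up to `max_pop` MPLS labels from an Ethernet frame in one pass.

    Illustrative model of a single "pop N tags" action; real hardware does
    this in the packet pipeline, not in software.
    """
    eth_header, payload = frame[:14], frame[14:]
    ethertype = struct.unpack("!H", eth_header[12:14])[0]
    if ethertype != ETHERTYPE_MPLS:
        return frame  # no MPLS stack present, nothing to pop

    popped = 0
    while popped < max_pop and len(payload) >= 4:
        label_entry = struct.unpack("!I", payload[:4])[0]
        bottom_of_stack = (label_entry >> 8) & 0x1  # S bit
        payload = payload[4:]
        popped += 1
        if bottom_of_stack:
            break

    # MPLS carries no inner ethertype, so infer it from the IP version nibble
    version = payload[0] >> 4 if payload else 0
    new_type = 0x0800 if version == 4 else 0x86DD
    return eth_header[:12] + struct.pack("!H", new_type) + payload
```

The helper name and the version-nibble heuristic are assumptions made for the sketch; the point is that the whole label stack is removed in a single action rather than one label per pipeline pass.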
24 MB Real-time Packet Buffer
The platform has the largest packet buffer in its class, with shared memory twice the size of comparable competitor designs. This helps absorb bursty traffic and avoid packet loss from oversubscription.
GTP inner IP hashing at line rate
The Cubro G5 products are the only products on the market that perform GTP inner IP hashing and load balancing at silicon level. The highlight of this solution is that it runs at line rate at multiple 100 Gbit speeds.
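The idea behind inner IP hashing is that load balancing on the *outer* tunnel addresses of GTP-U traffic would send all traffic between two gateways down one link; hashing the *inner* (subscriber) addresses spreads flows evenly. A simplified software model, assuming IPv4 without options and a minimal 8-byte GTP-U header (the function name and hash are illustrative, not Cubro's actual algorithm):

```python
import struct

GTPU_PORT = 2152  # standard GTP-U UDP port

def inner_ip_hash(packet: bytes, n_ports: int) -> int:
    """Pick an egress port by hashing the inner IP pair of a GTP-U packet.

    `packet` is a raw IPv4 packet. Non-GTP traffic falls back to the
    outer addresses.
    """
    ihl = (packet[0] & 0x0F) * 4                 # outer IPv4 header length
    udp = packet[ihl:ihl + 8]
    dst_port = struct.unpack("!H", udp[2:4])[0]
    if dst_port != GTPU_PORT:
        src, dst = packet[12:16], packet[16:20]  # outer addresses
    else:
        inner = packet[ihl + 8 + 8:]             # skip UDP (8) + GTP-U (8)
        src, dst = inner[12:16], inner[16:20]    # subscriber addresses
    # XOR makes the hash symmetric: both directions of a flow hit the same port
    key = bytes(a ^ b for a, b in zip(src, dst))
    return sum(key) % n_ports
```

Because the hash only looks at the inner pair, two packets with the same subscriber flow but different tunnel endpoints still land on the same tool port, which is what makes session-aware load balancing possible.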
On unit capture
The SSD on the host controller makes it possible to capture traffic inside the unit, a very useful feature for troubleshooting.
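Capturing on the unit means writing frames to a standard trace format on the SSD so they can be opened later in Wireshark or tcpdump. A minimal sketch of such a writer, using the classic pcap file layout (this is an illustration of the format, not Cubro's capture implementation):

```python
import struct
import time

def write_pcap(path: str, packets: list, linktype: int = 1) -> None:
    """Write captured Ethernet frames to a classic pcap file.

    linktype 1 = LINKTYPE_ETHERNET; snaplen 65535 keeps full frames.
    """
    with open(path, "wb") as f:
        # global header: magic, version 2.4, thiszone, sigfigs, snaplen, linktype
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, linktype))
        for pkt in packets:
            ts = time.time()
            sec, usec = int(ts), int((ts % 1) * 1_000_000)
            # per-packet record header: ts_sec, ts_usec, captured len, original len
            f.write(struct.pack("<IIII", sec, usec, len(pkt), len(pkt)))
            f.write(pkt)
```

Writing the standard format is what makes on-unit capture useful: the resulting file on the SSD is directly readable by any off-the-shelf analysis tool.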
Advanced Micro Burst detection
This feature helps find the cause of packets lost through oversubscription.
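A micro burst is a traffic spike so short that per-second counters average it away, yet it can still overflow a packet buffer and drop frames. A hedged sketch of the detection idea, using a sliding window over timestamped frame lengths (parameter names and the window approach are illustrative assumptions; the 24 MB default mirrors the G5 buffer size mentioned above):

```python
from collections import deque

def detect_microbursts(samples, window_ms=1.0, threshold_bytes=24 * 1024 * 1024):
    """Flag timestamps where bytes seen within a sliding window exceed a
    buffer-sized threshold.

    `samples` is an iterable of (timestamp_seconds, frame_length_bytes)
    pairs in chronological order; returns the timestamps where the
    window total crossed the threshold.
    """
    window = window_ms / 1000.0
    in_flight = deque()   # (timestamp, length) pairs inside the window
    total = 0             # bytes currently inside the window
    bursts = []
    for ts, length in samples:
        in_flight.append((ts, length))
        total += length
        # evict samples that have slid out of the window
        while in_flight and in_flight[0][0] < ts - window:
            _, old_len = in_flight.popleft()
            total -= old_len
        if total > threshold_bytes:
            bursts.append(ts)
    return bursts
```

The same traffic volume spread over a longer interval never trips the threshold, which is exactly why averaged utilization graphs miss the bursts that this kind of detection catches.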
General Feature Set:
- Max L2 capacity
- Max routing capacity
- Max L3 host ARP table
- Dynamic large tables
- Protocol Independent Switch (PIPS)
- GENEVE, NFV header
- Routing in/out tunnels, any to any
- ECMP with 1K membership
- Central shared packet memory
- NFV service chaining