As networking moves from scale-up devices to scale-out networks, the forwarding landscape is converging on a fairly narrow set of silicon options: custom in the core and merchant everywhere else. With cloud and multicloud poised to drive constantly shifting network architectures, however, the industry is all but certain to see a broad diversification of forwarding architectures in the coming years, a development that will have significant implications for both the economics and physics of networking in the multicloud era.
Economic Leverage
While the data center is not the only place where network silicon matters, it certainly dominates most conversations about networking technology. This is due primarily to the large number of devices being deployed in public and private data centers globally.
Despite the broad interest in the data center space, from a silicon perspective it looks more like a captive market. The vast majority of silicon deployed in switches today comes from Broadcom, so despite the industry’s move toward commodity hardware, openness and free choice, the center of all those white box dreams is really a black box switching ASIC.
In a captive market, economics are dictated primarily by suppliers, a dynamic that runs counter to industry trends. As the economics of networking come under more scrutiny—especially in large- and web-scale data centers—there will be pressure to diversify the forwarding architectures that power leaf-spine deployments around the world.
That diversification will require a layer of abstraction between the software (which drives behavior) and the chips (which forward the packets). This work is well underway, including efforts with the P4 open source project to provide a declarative language for interacting with forwarding planes. As P4 matures, companies will have a cost-effective way to port their network stack to different types of silicon, effectively bringing economic leverage to a part of the network that has traditionally lacked sufficient competition.
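To make the idea concrete, here is a minimal sketch (in Go, with hypothetical names rather than any vendor’s actual API) of the kind of abstraction such a layer enables: the same control-plane logic programs a route into either a P4-programmable target or a fixed-function merchant ASIC without knowing which one sits underneath.

```go
// Illustrative sketch only: a forwarding-plane abstraction that a network
// stack could program against, hiding silicon-specific details behind an
// interface. Names are hypothetical, not any vendor's actual API.
package main

import "fmt"

// ForwardingPlane is the contract the control plane programs against.
type ForwardingPlane interface {
	Name() string
	AddRoute(prefix, nextHop string) error
}

// p4Target stands in for a P4-programmable chip driven through a runtime API.
type p4Target struct{}

func (p4Target) Name() string { return "P4-programmable target" }
func (p4Target) AddRoute(prefix, nextHop string) error {
	fmt.Printf("[p4] insert %s -> %s into the lpm table\n", prefix, nextHop)
	return nil
}

// fixedASIC stands in for a fixed-function merchant chip driven through an SDK.
type fixedASIC struct{}

func (fixedASIC) Name() string { return "fixed-function merchant ASIC" }
func (fixedASIC) AddRoute(prefix, nextHop string) error {
	fmt.Printf("[sdk] program route %s via %s\n", prefix, nextHop)
	return nil
}

func main() {
	// The same control-plane logic runs unchanged over either backend.
	for _, fp := range []ForwardingPlane{p4Target{}, fixedASIC{}} {
		fmt.Println("programming", fp.Name())
		fp.AddRoute("10.0.0.0/24", "192.0.2.1")
	}
}
```

Whether that abstraction is expressed in P4, in an SDK shim or in something else entirely, the point is the same: the software above it should not have to change when the silicon below it does.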
Cloud and Data Center Edge
Of course, change will not be limited to just the data center. As private and public clouds continue to expand, so will demand on the cloud WAN and the data center edge.
Meeting that demand will take two different forms. Those who view traffic growth primarily as a scaling challenge will look to 400GbE to provide higher capacity, while those who view it primarily as an affordability challenge will look to 400GbE as a means of improving the cost per bit of WAN transport (using 100GbE breakouts on 400GbE silicon). In both cases, the market will see a proliferation of 400GbE silicon designed explicitly to provide scale and minimize cost.
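The cost-per-bit argument behind the breakout approach is simple arithmetic. The sketch below walks through it with purely hypothetical port prices; the dollar figures are illustrative placeholders, not real pricing.

```go
// Back-of-the-envelope sketch of the cost-per-bit argument for running
// 100GbE breakouts on 400GbE silicon. The dollar figures are placeholders
// chosen for illustration, not real pricing.
package main

import "fmt"

func main() {
	const cost400GPort = 4000.0 // hypothetical cost of one 400GbE port
	const cost100GPort = 1500.0 // hypothetical cost of one native 100GbE port

	// One 400GbE port broken out as 4 x 100GbE serves the same number of
	// 100GbE links as four native ports.
	perGigBreakout := cost400GPort / 400.0
	perGigNative := cost100GPort / 100.0

	fmt.Printf("native 100GbE:   $%.2f per Gb/s\n", perGigNative)
	fmt.Printf("4x100G breakout: $%.2f per Gb/s\n", perGigBreakout)
}
```

The conclusion holds whenever a 400GbE port costs less than four native 100GbE ports; the actual numbers will vary by platform and optics.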
Since 400GbE adoption will be somewhat measured at the outset, the market will not initially see the volumes that typically accompany merchant silicon. This will create a need for cost-effective custom silicon, especially in larger systems built for high-scale deployments.
Ultimately, this means that the future of forwarding will include both merchant and custom silicon.
NFV, Edge Cloud and Multi-Access Edge Computing (MEC)
Ignored for years, remote site connectivity has experienced a bit of a resurgence recently. Led initially by the rise of the universal CPE (uCPE) and bolstered by the SD-WAN movement, the branch (in all its flavors) is driving its own set of changes in forwarding architectures.
As multi-box connectivity solutions collapse onto white box hardware powered by general-purpose CPUs, the whole network functions virtualization (NFV) initiative has started to make more sense. On an Intel x86 platform, for instance, an enterprise can run a branch gateway that hosts enough virtual network functions (VNFs) to provide security and application services. Forwarding architectures must naturally expand to include x86 forwarding paths; within the x86 architecture, additional diversification will take place as some platforms optimize for VNF support (more cores on a Broadwell, for example) while others optimize for cost (leveraging an Atom processor, perhaps).
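As a rough illustration of that trade-off, the sketch below uses a made-up heuristic (not any product’s actual resource model) to split an x86 platform’s cores between the software forwarding path and the VNFs it hosts.

```go
// Illustrative heuristic only: dividing an x86 branch platform's cores
// between the software forwarding path and the VNFs it hosts. Not any
// product's actual resource model.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	cores := runtime.NumCPU()

	// On a core-rich part (a Xeon-class CPU, say) more cores can go to the
	// forwarding path; on a cost-optimized part (an Atom-class CPU) the
	// forwarding path stays lean so the VNFs still fit.
	fwdCores := cores / 4
	if fwdCores < 1 {
		fwdCores = 1
	}
	vnfCores := cores - fwdCores

	fmt.Printf("total: %d cores, forwarding path: %d, left for VNFs: %d\n",
		cores, fwdCores, vnfCores)
}
```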
This model will naturally lead to yet more diversification. Platforms like POWER and ARM, for example, will provide greater economic leverage in much the same way that broader merchant silicon options do in the data center. Meanwhile, more price-sensitive branch deployments (smaller branches, kiosks, ATMs and so on) will require low-cost, low-power options, which will tend to favor ARM-based platforms.
Finally, as edge cloud and MEC become more prominent (for IoT-type applications, for example), there will be a need to combine micro servers with networking platforms. This will require things like containerized connectivity to handle routing between applications running in on-box containers or edge cloud instances.
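At its simplest, that kind of containerized connectivity amounts to plumbing an application’s network namespace into the on-box forwarding function. The sketch below shows the bare mechanics using standard iproute2 commands; the names and addresses are illustrative, and it assumes root on a Linux host.

```go
// Bare-bones sketch of containerized connectivity: plumb a veth pair into an
// application's network namespace so the on-box routing function can reach it.
// Uses the standard iproute2 CLI; names and addresses are illustrative, and
// this requires root on a Linux host.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Printf("%v failed: %v (%s)\n", args, err, out)
	}
}

func main() {
	// A namespace standing in for an on-box container.
	run("ip", "netns", "add", "app1")

	// A veth pair between the host-side forwarding function and the container.
	run("ip", "link", "add", "veth-host", "type", "veth", "peer", "name", "veth-ctr")
	run("ip", "link", "set", "veth-ctr", "netns", "app1")

	// Address both ends and point the container's default route at the host side.
	run("ip", "addr", "add", "10.200.0.1/24", "dev", "veth-host")
	run("ip", "link", "set", "veth-host", "up")
	run("ip", "netns", "exec", "app1", "ip", "addr", "add", "10.200.0.2/24", "dev", "veth-ctr")
	run("ip", "netns", "exec", "app1", "ip", "link", "set", "veth-ctr", "up")
	run("ip", "netns", "exec", "app1", "ip", "route", "add", "default", "via", "10.200.0.1")
}
```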
Regardless of the use case, it is clear that network forwarding architectures will employ a diverse set of CPU-based architectures.
Making Diversification Economically Viable
While the discussion so far has focused primarily on the underlying forwarding paths, it is important to understand that these paths must be integrated into the software data planes that serve as the foundation of network operating systems.
If every forwarding architecture needs a unique network OS, the entire model breaks down. None of the economic benefits will materialize if suppliers have to rearchitect their offerings for every possible device permutation, and the testing and integration burden that such fragmentation places on operators is prohibitively expensive. The model simply doesn’t work in even a moderately sophisticated environment.
While the forwarding landscape might diversify, the software that sits on top must be abstracted enough to allow a mix-and-match of control and forwarding planes to support fit-for-purpose designs. This places additional burdens on the network OS; if it is not abstracted—and if common abstraction layers like P4 are not integrated—none of the benefits the market is seeking will be realized.
Pursuing Perfection
Juniper Networks has been pursuing all of these objectives through its adoption of merchant silicon in the data center, its development of custom silicon to drive the 400GbE transition, and its x86-based NFX Series Network Services Platform, which serves as both a uCPE and an SD-WAN gateway. Despite the diverse forwarding options we support today, we do it all with Junos as the network operating system. By investing early in areas like hardware abstraction and by continuing to push the envelope with industry-leading P4 integrations, Juniper is prepared to meet the needs of a diversified future.
Head over to SDxCentral for a more in-depth conversation on the future of forwarding between industry experts at AvidThink, Google Cloud and Juniper Networks.