Just as networking technology is evolving, so too are the architectures that connect and support applications and services. In today’s IT world, there is no single enterprise-wide infrastructure. Rather, there are individual networks—data center, campus, branch, public cloud and WAN—each with its own teams, budgets, priorities and tools. And while each of these networks is also likely to have its own micro drivers for change, ultimately the evolution of network architectures will be somewhat co-dependent as the industry converges isolated devices into coherently managed resource pools.
The macro driver
Individual refresh and expansion cycles will provide these networks with opportunities for incremental improvement, allowing enterprises to take advantage of improved economics in networking equipment. Incremental improvements, however, do not define a macro driver.
The broad context in which all of this will happen is cloud and multicloud. As enterprises adopt cloud and, eventually, multicloud options to service their application workloads, the concept of network operations will be redefined.
Indeed, the promise of multicloud will go unfulfilled without operational change. The end game is a pool of distributed resources, all managed as a collective with coherent end-to-end control and security. This requires the coordinated evolution of all the places in the network.
While individual in-the-moment priorities will dictate how this evolution unfolds, the ultimate destination requires that the disparate networks in enterprise IT grow closer together. This has implications for how architectures are likely to evolve.
Less is more
For years, architectures have been collapsing. In the data center, three tiers have become two. At the branch gateway, multiple boxes are becoming a single box running virtual functions as uCPE and SD-WAN emerge. Wherever possible, network architectures will look to collapse complexity into a simpler architecture with fewer moving parts, leveraging software to unlock capability.
Additionally, it seems obvious that the go-forward architectures in every place in the network will revolve around a smaller set of standard building blocks. Proprietary and niche protocols will be retired in favor of open alternatives. The number of technologies in production will drop as simplification overtakes customization as a primary design practice.
This is already happening in the data center, where BGP EVPN is the de facto standard for deploying IP fabrics. Campus architectures will follow a similar route, likely adopting BGP EVPN as well. This would allow enterprises to converge not just within a specific place in the network, but also across all places in the network, standardizing on tools, processes and people.
Broadly open
If there are going to be fewer technologies leveraged architecturally, those technologies will have to be based on broadly adopted open standards.
Open is not new and debating the value of standards-based approaches is unnecessary. But open will not be enough. The standards that emerge will need to be broadly adopted. Networking teams have no economic leverage when a technology is narrowly available. Clearly, the cost of networking will have to drop, especially as data explodes and applications get distributed. This means that emerging standards will only be transformative if they are adopted by a significant cross-section of suppliers. This would seem to favor not only protocols like BGP EVPN, but also management standards like NETCONF, gRPC, and OpenConfig.
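To make this concrete, the sketch below retrieves a device's running configuration over NETCONF using the open-source ncclient library. The address and credentials are placeholders, and any NETCONF-capable device could sit on the other end; nothing here is specific to one vendor.

```python
# A minimal sketch of standards-based management: retrieve a device's
# running configuration over NETCONF. Assumes the open-source ncclient
# library (pip install ncclient) and a device exposing NETCONF over SSH;
# the address and credentials below are placeholders.
from ncclient import manager

with manager.connect(
    host="192.0.2.1",        # placeholder management address
    port=830,                # standard NETCONF-over-SSH port
    username="admin",
    password="changeme",
    hostkey_verify=False,    # acceptable for a lab sketch, not for production
) as m:
    # Fetch the running configuration as an XML document.
    reply = m.get_config(source="running")
    print(reply.data_xml)
```

The point is not this particular library; it is that the same handful of lines can work against any supplier that implements the standard, and that breadth of adoption is what creates the economic leverage described above.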
Centralized and distributed
The cloud movement is largely perceived as a centralization of application resources. While this is generally true, it’s worth noting that this is largely a logical phenomenon. That is to say, resources are logically centralized, but physically distributed.
At the core of cloud and multicloud is a simple question: do economics or physics dictate the design?
For some applications, the answer is economics, in which case a centralized set of resources that benefits from economies of scale makes sense. For other applications, performance is what matters. IoT is a good example: performance-sensitive workloads will likely be moved to the edge. This will drive the proliferation of edge cloud and multi-access edge computing (MEC), where compute and storage are pushed to white box devices at the network edge. For remote connectivity, this will likely lead to remote gateways that resemble uCPEs, with limited compute and storage resources co-resident on the device.
Another dynamic that will drive a mix of centralized and distributed architectures is data. Specifically, is it better to move the data to the application, or the application to the data?
Traditionally, the answer has always been to move the data. But where there is a large amount of data or the WAN link is constrained (satellite connections to a remote drilling platform, for example), moving the application is the more feasible option. This means that, at least for some applications, the cloud will exist at the edge.
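A rough back-of-the-envelope calculation shows why. The data volume, application image size and link speed below are illustrative assumptions, not measurements from any real deployment.

```python
# Back-of-the-envelope comparison: move the data to the application, or the
# application to the data? The sizes and link speed are illustrative assumptions.

data_tb = 10          # data set accumulating at the remote site, in terabytes
app_image_gb = 2      # containerized application image, in gigabytes
link_mbps = 10        # usable satellite uplink, in megabits per second

def transfer_seconds(size_bits: float, rate_bps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead, loss and retries."""
    return size_bits / rate_bps

rate_bps = link_mbps * 1e6
data_seconds = transfer_seconds(data_tb * 8e12, rate_bps)
app_seconds = transfer_seconds(app_image_gb * 8e9, rate_bps)

print(f"Moving the data:        {data_seconds / 86_400:.0f} days")
print(f"Moving the application: {app_seconds / 60:.0f} minutes")
```

Under those assumptions, shipping the data takes roughly three months while shipping the application takes under half an hour, which is exactly the case where the cloud has to come to the data.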
Abstracted control
The general premise of software-defined networking, which has been in full force for years, is that operators should interact with the network as a whole rather than device by device. This is a form of abstracted control, and in scale-out architectures, it is the only reasonable way to operate.
Abstraction will continue to proliferate through networks, forcing teams to operate at higher levels. Intent-based networking represents the obvious next step in abstraction, elevating operators above the device CLI. In many ways, SDN requires intent-based networking; if controller-based management exists over a heterogeneous network, there is no alternative to leveraging the controller as an API broker that translates centrally specified service descriptions into device-specific instructions. The fact that this concept now has a name is encouraging, even though the idea was always going to be the natural conclusion of a software-defined world.
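To make the broker idea concrete, the toy sketch below renders one vendor-neutral service description into per-device instructions. The intent schema, device names and rendering functions are invented for illustration and do not correspond to any particular controller's API.

```python
# Toy illustration of a controller acting as an API broker: one abstract
# service description is rendered into device-specific instructions.
# The intent schema and renderers are hypothetical, for illustration only.

intent = {
    "service": "l2-segment",
    "name": "pos-terminals",
    "vlan": 120,
    "sites": ["branch-nyc", "branch-sfo"],
}

def render_hierarchical_style(intent):
    # Hypothetical rendering for a device with a hierarchical config style.
    return f"set vlans {intent['name']} vlan-id {intent['vlan']}"

def render_line_style(intent):
    # Hypothetical rendering for a device with a flat, line-based config style.
    return f"vlan {intent['vlan']}\n name {intent['name']}"

# A real controller would hold this inventory and push the rendered snippets
# over NETCONF, gNMI or a REST API; here we simply print them.
inventory = {"branch-nyc": render_hierarchical_style, "branch-sfo": render_line_style}

for site in intent["sites"]:
    print(f"--- {site} ---")
    print(inventory[site](intent))
```

The value is that the service is described once, at the level of intent, while the device-specific details stay hidden behind the broker.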
Abstraction, however, is not limited to physical networks. The same management principles that led to our current state will persist in the cloud arena. Control will have to be abstracted above the next-generation devices (clouds). Relying on cloud-specific primitives when the eventual outcome is abstracted control seems a risky approach at best. Multicloud management will be the norm. This is true even where only a single cloud is leveraged, if only to provide negotiating leverage over the cloud supplier.
Operations first
All these changes portend a shift in the primary concern of IT teams from moving packets to managing infrastructure. Infrastructure ought to be largely invisible and utility-like, noticeable only when something goes wrong. If your network is siloed and complex, any service built on top of it will be, by necessity, a fragile patchwork of domain-specific pieces stitched together, difficult to operate and virtually impossible to observe. The future must be different: operations will need to be seamless.
Architecturally, this means that things like analytics and streaming telemetry will be critical components, not just for a specific place in the network but across the end-to-end network. Multicloud operations will require a common platform that includes analytics, telemetry, orchestration and management. This common platform will look like orchestration and control early on but will ultimately evolve into a services broker with access to relevant service catalogues.
In many ways, the evolution of network architectures will ultimately support an operational framework through which teams progress, moving from manual work toward increasingly automated modes of operation:
- Human-driven—The manual execution of workflows, typically defined in a document, and executed via the CLI.
- Workflow-driven—The manual execution of workflows, defined in lightweight scripts and playbooks, and executed via automated tools.
- Event-driven—The automatic execution of workflows, defined as code and playbooks, and executed by an event engine (see the sketch after this list).
- Machine-driven—Dynamic operations, including a continuous integration and delivery pipeline, leveraging machine learning.
- Self-driving—Autonomous continuous response in the infrastructure, guided by humans but executed by systems.
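As a purely illustrative sketch of the event-driven stage, the snippet below shows a toy event engine that dispatches incoming events to workflows defined as code. The event names and remediation actions are hypothetical and stand in for whatever playbooks a real operations team would register.

```python
# Purely illustrative sketch of event-driven operations: an event engine
# maps incoming events to workflows defined as code. Event names and
# remediation actions are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    kind: str          # e.g. "interface-down", "bgp-neighbor-lost"
    device: str
    detail: str

def reroute_traffic(event: Event) -> None:
    print(f"[workflow] shifting traffic away from {event.device} ({event.detail})")

def open_ticket(event: Event) -> None:
    print(f"[workflow] opening ticket for unhandled event on {event.device}: {event.kind}")

# The "playbooks": workflows registered against the events that trigger them.
handlers: Dict[str, Callable[[Event], None]] = {
    "interface-down": reroute_traffic,
    "bgp-neighbor-lost": reroute_traffic,
}

def event_engine(events: List[Event]) -> None:
    """Dispatch each event to its registered workflow, with a safe default."""
    for event in events:
        handlers.get(event.kind, open_ticket)(event)

event_engine([
    Event("interface-down", "spine-1", "et-0/0/12 flapped"),
    Event("power-supply-failed", "leaf-7", "PSU 2 offline"),
])
```

The same pattern, extended with learning and closed-loop control, is arguably the foundation on which the machine-driven and self-driving stages build.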
In fact, my colleague, Kireeti Kompella, recently participated in a lively discussion on self-driving networks with industry experts at Dell and LinkedIn. Head over to the Get Smart video series on SDxCentral to follow the conversation.
Indeed, the architectures now on the horizon will be as much about making networking better as they are about building better networks. Put differently, operating a network will become far easier thanks to automation within the infrastructure and operations, allowing operators and architects alike to spend more time on the business of doing business.
Of course, this will ultimately mean that network operators themselves will need to understand the broader context in which they serve. If this does not lead to an IT team that can easily and rapidly develop, improve, supply and operate all the services that enterprise users and applications require, then the transition will have been for naught.