Benefits of Micro-segmentation
Enumerating the benefits of micro-segmentation is like borrowing pages from Sun Tzu's "The Art of War." It shifts the focus, at least from a security perspective, from reacting to the enemy to building on your own strengths. It is a strategy in which victory is achieved without having to fight the enemy: you ensure the safety of your defense by holding positions that cannot be attacked.
What are one's own strengths? A brilliant analogy can be found in this article about 3 Networking Innovations Businesses Desperately Need: "You know your house better than an attacker." Because you understand how your house is segmented, you can reduce both the attack surface and the blast radius. Further down, I will describe an approach to implementing this strategy (hint: knowing your house is like knowing your intent). Let's start by describing the challenges.
“Knowing Your House” Challenge
If you are relying on knowing your house to gain an advantage, you had better know your house well. With the "whitelist" model you explicitly define which communication patterns are allowed, and you can lose the battle by making mistakes at this stage. This is especially daunting when your house is complex. Switching from the analogy to the technical problem domain, the question becomes: how do you determine which workload endpoints you have, so that you can define and "hold the positions that cannot be attacked"?
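To make the "whitelist" model concrete, here is a minimal Python sketch of a deny-by-default allow-list. The endpoint names, ports, and field names are purely hypothetical; the point is that the intent is explicit, and a single missing or mistyped entry is enough to break a legitimate communication pattern or leave a hole.

```python
from dataclasses import dataclass

# A minimal, illustrative whitelist ("allow-list") entry: anything not
# explicitly listed is denied. All names here are hypothetical.
@dataclass(frozen=True)
class AllowRule:
    src_endpoint: str   # e.g. "web-frontend"
    dst_endpoint: str   # e.g. "orders-db"
    dst_port: int       # e.g. 5432
    protocol: str = "tcp"

WHITELIST = {
    AllowRule("web-frontend", "orders-db", 5432),
    AllowRule("web-frontend", "payments-api", 443),
}

def is_allowed(src: str, dst: str, port: int, protocol: str = "tcp") -> bool:
    """Deny by default; allow only what is explicitly declared."""
    return AllowRule(src, dst, port, protocol) in WHITELIST
```

Deny-by-default is exactly what makes a mistake at this stage so costly: anything you forget to declare simply stops working, and anything you declare too broadly becomes an opening.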
One way to do this is to discover applications and their behaviors, and then build model representations based on those behaviors. This is a tremendous help in dealing with application-level "brownfield." There are "whitelist" solutions on the market, but they lack discovery of "brownfield" applications, which has proven to be a big pain point. The fundamental issue is that discovery frequently relies on Machine Learning (ML) or Artificial Intelligence (AI) techniques (which are 80-90% accurate), so mapping what is on the wire to the actual workload communication patterns is a non-deterministic, error-prone, and expensive process.
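The following toy sketch (not any particular vendor's algorithm) illustrates why discovery is non-deterministic: observed flows are mapped to workload labels by some classifier, and only mappings above a confidence threshold can be turned into candidate whitelist rules automatically. The flow records, the labels, and the 0.9 threshold are all hypothetical.

```python
from collections import Counter

# Toy behavior-based discovery: each observed flow carries a classifier
# label and a confidence score. High-confidence flows become candidate
# whitelist rules; everything else needs a human decision.
observed_flows = [
    # (src_ip, dst_ip, dst_port, (src_label, dst_label), confidence)
    ("10.0.1.5", "10.0.2.9", 5432, ("web-frontend", "orders-db"), 0.97),
    ("10.0.1.5", "10.0.3.7", 6379, ("web-frontend", "cache"), 0.82),
    ("10.0.9.1", "10.0.2.9", 5432, ("unknown", "orders-db"), 0.41),
]

CONFIDENCE_THRESHOLD = 0.9

candidate_rules = []
needs_review = []
for src_ip, dst_ip, port, (src_label, dst_label), confidence in observed_flows:
    if confidence >= CONFIDENCE_THRESHOLD and "unknown" not in (src_label, dst_label):
        candidate_rules.append((src_label, dst_label, port))
    else:
        needs_review.append((src_ip, dst_ip, port, confidence))

# With 80-90% classifier accuracy, a non-trivial share of traffic lands in
# needs_review: the discovery process is inherently probabilistic.
print(Counter(["auto"] * len(candidate_rules) + ["review"] * len(needs_review)))
```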
This is in dramatic contrast with the "five-nines" requirement for network uptime. To make matters worse, some of the "discovered" workloads may be malicious, so you need to deal with that as well. Or maybe you are the lucky one (or should I say the proactive one) and all of these workloads are already known. Whether the definition comes from a spreadsheet or from discovery, you need to represent this knowledge in a data model and a queryable data store. Knowing your house is knowing your intent. This intent must be stored in a single source of truth, as described in Intent-Based Networking Taxonomy. Level 1 Intent-Based Networking has this single source of truth.
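As a rough illustration, assuming a trivially simple in-memory store (a real system would use a proper queryable database or graph), the single source of truth can be as little as one record per endpoint that policy tooling can query. All names and attributes below are hypothetical.

```python
# A minimal sketch of "knowing your house": one queryable source of truth
# for endpoint intent. A plain dictionary stands in for the real store.
source_of_truth = {
    "web-frontend": {"zone": "dmz",  "owner": "shop-team", "discovered": False},
    "orders-db":    {"zone": "data", "owner": "shop-team", "discovered": False},
    "miner-x":      {"zone": "data", "owner": None,        "discovered": True},
}

def query(store, **criteria):
    """Return endpoint names whose attributes match all given criteria."""
    return [name for name, attrs in store.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

# Discovered endpoints with no owner are exactly the suspicious ones the
# text warns about: they need triage before any whitelist policy covers them.
print(query(source_of_truth, discovered=True, owner=None))  # ['miner-x']
```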
Composing Reachability and Security
Segmentation (micro-segmentation's less granular cousin) has been embedded in standard networking constructs such as routes, VLANs, and VRFs for some time. While these allow for segmentation, they were (and still are) constructs whose primary purpose is enabling "reachability." The primary goal of routing protocols is to exchange reachability information, and to do so at scale (the reason for mentioning scale will become obvious in a moment). You can implement segmentation at the reachability level by partitioning the reachability resources (route filtering, VLANs, VRFs). Micro-segmentation requires higher granularity in order to help with security.
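The difference in granularity can be shown with a small sketch, assuming a hypothetical mapping of endpoints to VRFs: partitioning reachability resources isolates whole groups, while micro-segmentation adds per-endpoint, per-port rules on top of that coarse partition.

```python
# Coarse segmentation: a hypothetical VRF per segment. Everything inside
# one VRF can reach everything else in it.
vrf_membership = {
    "web-frontend": "vrf-shop",
    "orders-db":    "vrf-shop",
    "hr-app":       "vrf-corp",
}

# Micro-segmentation: per-endpoint, per-port rules, evaluated only where
# coarse reachability already exists.
micro_rules = {("web-frontend", "orders-db", 5432)}

def coarse_reachable(a: str, b: str) -> bool:
    return vrf_membership[a] == vrf_membership[b]

def micro_allowed(a: str, b: str, port: int) -> bool:
    return coarse_reachable(a, b) and (a, b, port) in micro_rules

print(coarse_reachable("web-frontend", "orders-db"))   # True: same VRF
print(micro_allowed("web-frontend", "orders-db", 22))  # False: finer grain
```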
More importantly, reachability is a prerequisite for micro-segmentation. As such, micro-segmentation requires two functions, reachability and security, to be composed coherently. A security requirement such as "endpoint A must be able to talk only to endpoint B, and only on port 80" presupposes that these endpoints are reachable in the first place. You can place Access Control List (ACL) rules on your servers all day long, but if there is no reachability between them, those ACLs are no-ops.
Interactions between security and reachability can, and usually do, get more interesting. You may decide to segment reachability domains so that you can reuse IP addressing resources for resource optimization or workload portability, and within these reachability domains you may decide to apply micro-segmentation policies. The phrase "within these reachability domains" implies knowing that a given endpoint is a member of a particular reachability domain and is therefore subject to that domain's micro-segmentation policy.
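Here is a minimal sketch of that composition, with hypothetical endpoints and deliberately overlapping IP space across two reachability domains: a security rule is enforceable only when the endpoints it names share a reachability domain; otherwise it is a no-op.

```python
# Overlapping IP space across domains is exactly why the endpoint's domain
# membership must be part of the model. All names here are hypothetical.
endpoints = {
    # name: (reachability_domain, ip)  - note the reused 10.0.0.5
    "app-blue":  ("domain-blue",  "10.0.0.5"),
    "db-blue":   ("domain-blue",  "10.0.0.9"),
    "app-green": ("domain-green", "10.0.0.5"),
}

security_rules = [
    ("app-blue", "db-blue", 80),    # composes: same reachability domain
    ("app-green", "db-blue", 80),   # a no-op: no reachability between domains
]

def validate(rule):
    src, dst, port = rule
    same_domain = endpoints[src][0] == endpoints[dst][0]
    status = "enforceable" if same_domain else "no-op (no reachability)"
    return status, rule

for rule in security_rules:
    print(validate(rule))
```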
Say, for example, you want your workloads to span public and private clouds. "Span" here requires managing security and reachability in an integrated fashion. One option is to lay down a pipe between the clouds and leak the routes, but you must also punch holes through your whitelist policies. Again, integrated reachability and security is at the heart of the challenge.
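A toy illustration of the "span the clouds" case: making a workload reachable across the pipe (route leaking) and opening the corresponding whitelist hole are two changes that must happen together, which is exactly why modeling them in one place matters. The names and structures below are hypothetical.

```python
leaked_routes = set()      # reachability side of the change
whitelist_holes = set()    # security side of the change

def span_clouds(workload_prefix: str, peer: str, port: int) -> None:
    """Apply both halves of the change together; forgetting either one
    leaves the workload unreachable or the policy silently broken."""
    leaked_routes.add(workload_prefix)
    whitelist_holes.add((workload_prefix, peer, port))

span_clouds("10.1.0.0/24", "on-prem-db", 5432)
print(leaked_routes, whitelist_holes)
```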
Some of the most widely used Software-Defined Networking (SDN) solutions have the interesting property that reachability and security are dealt with using the same mechanism. This is because the high granularity of explicit reachability information (the mapping of an overlay addressing space onto an underlay) inherently allows for micro-segmentation, which solves the composition challenge.
This comes at a cost, though. Routing protocols have evolved (and hardened in the process) over the last few decades, and a lot of effort has been put into solving reachability at scale, where SDN has had challenges. Decoupling reachability and security at the implementation level is critical for these two functions to evolve independently and at their own speeds. This does not mean that endpoint intent needs to be decoupled as well. On the contrary, you may want your intent specification to be as simple as possible, making reachability and security properties of the endpoints, yet decoupled from how that intent is implemented.
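One way to picture that decoupling, purely as a sketch: a single simple intent record per endpoint is rendered into a reachability artifact and a security artifact by independent functions, each free to evolve on its own. The render targets shown here (a route advertisement string and ACL-style lines) are hypothetical placeholders, not any product's configuration syntax.

```python
# One intent record; two independent renderers.
intent = {
    "endpoint": "orders-db",
    "reachability": {"domain": "domain-blue", "ip": "10.0.0.9/32"},
    "security": {"allow_from": [("web-frontend", 5432)]},
}

def render_reachability(record: dict) -> str:
    r = record["reachability"]
    return f"advertise {r['ip']} in {r['domain']}"

def render_security(record: dict) -> list[str]:
    return [f"permit tcp {src} -> {record['endpoint']} port {port}"
            for src, port in record["security"]["allow_from"]]

# The source of truth stays one simple record; either renderer can be
# swapped (a routing protocol today, an SDN overlay tomorrow) without
# touching the intent itself.
print(render_reachability(intent))
print(render_security(intent))
```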
You should have the choice between a pure SDN solution that fits your scale requirements and a decoupled implementation that offers more choice and scale down the road. Simplicity is often cited as a valid argument for SDN overlays. Intent-Based Networking delivers the same simplicity without being bound to a specific implementation, thus offering choice and agility.
This challenge points to the fundamental requirement for an Intent-Based Networking system to serve as a single source of truth across different functions, which in this example are reachability and security. When different functions live in different tools or data stores, the operator must stitch them together, which increases the danger of making a mistake, introducing a vulnerability, and losing the battle.
Reducing (or controlling) the attack surface is a key element of a secure solution, and it is primarily achieved by controlling reachability; the two need to be tightly coordinated. Interactions with other functions, such as Load Balancing (LB), Quality of Service (QoS), or High Availability (HA), whose logical place is also in the single source of truth, are left as an exercise for the reader, or may even be the subject of a follow-up article (let me know your thoughts in the comments).
In my next blog post, I’ll delve into intent and enforcement and what that means for reducing the operational complexities of micro-segmentation policy.