In information security, most would agree that guessing is no substitute for knowing. Hope is not a strategy, yet many organizations rely on it anyway and skip a proof of concept when selecting a new security technology. There is a better way to address this part of the selection process: independent third-party testing.
At Juniper Networks, we have supported testing for decades and continue to champion the value of third-party security testing. Organizations such as ICSA, the former NSS Labs, the industry initiative NetSecOPEN and CyberRatings are a handful of those that operate on these principles.
Third-Party Testing Helps Eliminate Doubt
Testing helps inform decisions about which technology is best for an organization and how vendors can further improve their products and solutions. It is a complicated, often resource-intensive endeavor, and time or resource constraints frequently push organizations to take shortcuts or forgo comprehensive solution testing altogether. Yet there is no substitute for testing a product’s efficacy to gain confidence that it delivers on its claims.
However, very few organizations have robust testing programs. Building one takes time and energy, and with so many competing priorities it is often hard to justify a proof of concept (PoC). Given the challenge of running a PoC in-house, there are alternatives. An organization might have a services vendor run it, or it may ask one of the competing vendors in a bake-off to run the PoC (in spite of the conflict of interest). Whatever the approach, it is important to reduce or eliminate bias as much as possible. Eliminating bias and producing defensible data is the primary purpose of a test methodology: to match test cases to production deployment requirements and ensure that results can be replicated. That may sound simple, but developing a methodology and a test harness (the environment, tooling and threats, along with the execution details) is an enormous amount of work.
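As a rough illustration, and not any particular lab’s methodology, a test case can be treated as data: each case names the production requirement it represents, pins the exact device configuration and traffic mix, and is re-run verbatim so the verdict can be replicated. All names below (TestCase, run_case, replicate) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A single reproducible test case tied to a production deployment requirement."""
    name: str                   # e.g. "block-known-exploit-over-tls"
    requirement: str            # the production requirement this case represents
    device_config: dict         # the exact configuration the device runs during the test
    traffic_profile: str        # background traffic mix replayed alongside the threat
    threat_samples: tuple = ()  # identifiers of the threat samples delivered
    expected_action: str = "block"

def run_case(case: TestCase, run_id: int) -> dict:
    """Stand-in for the harness call that drives traffic and records the verdict."""
    return {"case": case.name, "run": run_id, "verdict": case.expected_action}

def replicate(case: TestCase, runs: int = 3) -> list:
    """Execute the same case several times; identical inputs should yield identical verdicts."""
    return [run_case(case, run_id=i) for i in range(runs)]

if __name__ == "__main__":
    case = TestCase(
        name="block-known-exploit-over-tls",
        requirement="Inspect and block exploit traffic on decrypted TLS sessions",
        device_config={"tls_inspection": True, "ips_profile": "strict"},
        traffic_profile="enterprise-mix-1gbps",
        threat_samples=("sample-001", "sample-002"),
    )
    print(replicate(case))
```

The point of encoding cases this way is replication: anyone with the same configuration, traffic mix and samples should be able to reproduce the result and defend it under scrutiny.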
Use Case Diligence Is Critical
Why is security testing so difficult? Effectively testing a security technology solution requires domain expertise, experience and time to set up test cases that represent how the product ought to behave once deployed into an organization’s production environment. There is a considerable difference between testing an individual feature and testing a complex stack of capabilities meant to function as a system. For example, testing an Intrusion Prevention System (IPS) is simpler than testing a Next-Generation Firewall (NGFW), because an NGFW is also a router and, potentially, an IPsec termination point on top of its threat inspection and mitigation capabilities.
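To make that difference concrete, here is a purely illustrative sketch (the capability names are assumptions, not a Juniper or test-lab profile) of why a system test grows harder than a feature test: every additional capability that is active during the test adds interactions the methodology has to account for.

```python
# Hypothetical capability matrices contrasting a single-feature IPS test with a
# full NGFW system test, where routing, IPsec termination and threat inspection
# must all be exercised at the same time.

IPS_TEST_PROFILE = {
    "threat_inspection": True,   # the one capability under test
}

NGFW_TEST_PROFILE = {
    "routing": True,             # the device forwards traffic between zones like a router
    "ipsec_termination": True,   # tunnels terminate on the device during the test
    "threat_inspection": True,   # detection and mitigation run on the same flows
    "nat": True,
    "logging": True,
}

def enabled_interactions(profile: dict) -> int:
    """Count the pairwise feature interactions the test must account for."""
    active = [name for name, enabled in profile.items() if enabled]
    return len(active) * (len(active) - 1) // 2

print("IPS interactions: ", enabled_interactions(IPS_TEST_PROFILE))   # 0
print("NGFW interactions:", enabled_interactions(NGFW_TEST_PROFILE))  # 10
```

Even this toy count understates the work, since real interactions involve configuration, traffic mix and failure modes rather than simple feature pairs.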
A test cannot merely consider the threats, their delivery vectors and the actor’s techniques. It must also account for service expectations and the impact of the production environment in which the security device or solution will be configured and deployed. The scoring methodology is equally critical. Measuring whether a threat was detected and blocked is simple, but at what point in the flow was it blocked, and was the service experience impacted in the process? This is just one example of the considerations that go into a test methodology and support robust test results that can withstand scrutiny.
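As a hedged sketch of that scoring idea (the field names and thresholds are assumptions for illustration, not a published methodology), a score can capture both whether a threat was blocked and how the block landed in the flow, alongside the observed service impact:

```python
from dataclasses import dataclass

@dataclass
class FlowResult:
    """Outcome of delivering one threat sample through the device under test."""
    detected: bool
    blocked: bool
    packets_before_block: int   # how far into the flow the block landed
    added_latency_ms: float     # service-experience cost observed on benign traffic

def score(result: FlowResult,
          max_leak_packets: int = 3,
          latency_budget_ms: float = 5.0) -> dict:
    """Score efficacy and service experience together, not just detect/block."""
    efficacy_ok = result.detected and result.blocked
    leaked_too_much = result.packets_before_block > max_leak_packets
    service_ok = result.added_latency_ms <= latency_budget_ms
    return {
        "efficacy": "pass" if efficacy_ok and not leaked_too_much else "fail",
        "service_experience": "pass" if service_ok else "degraded",
    }

print(score(FlowResult(detected=True, blocked=True,
                       packets_before_block=1, added_latency_ms=2.4)))
```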
Testing organizations that transparently share their methodologies, motives and intentions, and that work closely with customers and vendors alike to build effective tests representing the real needs of enterprises, help move the industry forward. They produce valuable tests and data that customers can rely on to support decisions and select the best technology for their needs. Collaboration between the community and testing groups drives methodology development and improvement, and it builds trust throughout the community: trust in the test, in the results, in the vendors and in the solution decision.
Test Data Is Informative, Not Malicious
Of course, not every test result is a glowing report of product excellence. In some cases, the results may appear critical. It is important to evaluate the data on its own merit, and this is where the methodology is crucial: if the use case was valid and the test was executed fairly, in a manner that represents how the product will actually be used, then even an unfavorable result provides valuable behavior and mitigation data a vendor can use to improve its product. Some vendors rise to the occasion and embrace the feedback, which can sometimes be a case of “tough love”. Others retreat from it in the hope that by not testing their products, efficacy gaps won’t be revealed. The risk is that those gaps are instead discovered by threat actors, who will use them for their own malicious ends. A vendor’s reaction to unfavorable test data also says a great deal about its culture. The mission of many testing organizations is to provide data and guidance to organizations trying to protect themselves, and that should be the ultimate intent of every security vendor.
Simply put, objective, third-party testing plays a critical role in helping inform customer decisions. When conducted transparently, security testing provides fact-based results, ultimately leading organizations to make the right product and solution decisions for their business.