Whether at work or in our personal lives, we want to purchase products that perform as advertised and get the job done. Fortunately for consumers, there are excellent resources that test products and provide fairly accurate buying advice based on how they perform. This is especially important in the enterprise security market, yet that is precisely where accurate assessments and truthful data are rare.
Third-party testing can help consumers and organizations make informed decisions when buying security products, but not all third-party tests are created equal. Knowing the difference between a quality test and one that merely checks the boxes can mean the difference between a great purchasing decision and a grim one.
Transparency is the key to trust
Transparency is the key to evaluating any test data, and easily locating and understanding the data elements helps build trust in the data, the report, the testing organization and, ultimately, the product or technology tested. These are fundamental concepts that I championed and implemented in my past life – running one of the largest and most well-respected security testing groups in the industry – to make sure all results were defensible, repeatable and stood up to the most intense scrutiny.
I recommend looking for the following in a test. A single missing item could be an oversight, but if you cannot locate these elements at all, the test data should be heavily scrutinized or discarded entirely. (A quick sketch of how to screen a report against this checklist follows the list.)
Reference Test Content:
- Test methodology
- Clear, relevant use cases
- Test environment variables or assumptions
- Test cases that indicate precedence (along with what was measured and why it is relevant)
- Threats and tools used
- A feedback method
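To make that screening concrete, here is a minimal sketch in Python. The field names are entirely hypothetical (there is no standard schema for test reports); the idea is simply to check whether a report documents each element before you invest time in its results:

```python
# Hypothetical checklist screen: the field names are illustrative, not a standard.
REQUIRED_ELEMENTS = [
    "methodology",              # how the environment was built and the test run
    "use_cases",                # the scenarios the test claims to represent
    "environment_assumptions",  # topology, versions, traffic assumptions
    "test_case_precedence",     # ordering and dependencies between test cases
    "threats_and_tools",        # which threats and tools were used
    "feedback_contact",         # a way to reach the testing organization
]

def screen_report(report: dict) -> list:
    """Return the checklist elements the report fails to document."""
    return [e for e in REQUIRED_ELEMENTS if not report.get(e)]

report = {"methodology": "https://example.com/methodology.pdf",
          "use_cases": ["enterprise edge firewall"]}
missing = screen_report(report)
if missing:
    print("Scrutinize or discard - undocumented elements:", ", ".join(missing))
```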
Let’s break these down a bit.
Test methodology
The methodology is where the bulk of the reference details are located. It may be part of the test report, but it is usually a separate document that details exactly how the testing environment was constructed and what it entails (i.e., topology, clients, software versions and other assumptions). This is the master document within which many of the other data groups and test content are found.
Use cases
Every test approach has one or more use cases that are considered in both the design of the test and of the test harness (e.g., the tools used to run the test itself, the typical apps on a client or a server, the traffic mix, the volume of traffic, etc.). This is particularly key because a test may be entirely valid yet wholly inapplicable to your specific needs, making the test data considerably less relevant as purchasing criteria.
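As a rough illustration of that applicability question, the sketch below (with made-up traffic mixes and a deliberately crude overlap measure) shows how you might compare a test's declared traffic profile against your own environment:

```python
# Illustrative comparison of a test's declared traffic mix against your own.
# Both mixes here are made-up numbers, expressed as fractions of total traffic.
TEST_TRAFFIC_MIX = {"https": 0.60, "smb": 0.15, "dns": 0.10, "smtp": 0.15}
MY_TRAFFIC_MIX = {"https": 0.80, "smb": 0.02, "dns": 0.08, "rdp": 0.10}

def mix_overlap(test: dict, mine: dict) -> float:
    """Fraction of my traffic profile that the test mix actually exercised."""
    return sum(min(test.get(proto, 0.0), share) for proto, share in mine.items())

overlap = mix_overlap(TEST_TRAFFIC_MIX, MY_TRAFFIC_MIX)
print(f"The test exercised roughly {overlap:.0%} of my traffic profile")
# A low overlap means the test may be valid but inapplicable to your environment.
```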
Environment variables or assumptions
While it is not feasible to capture every potential variable in a test harness or test run, a subset of them significantly impacts how the testing will occur and how a product or device under test will behave. Again, in the interest of transparency, knowing how the environment is set up helps you judge whether the test results apply to your use case or scenario.
Test cases
There are always dependencies or precedence tests that should occur in a test. For example, performance should be baselined with unencrypted traffic before introducing processing-intensive TLS 1.3 encryption, and a device should be shown to correctly recognize base application traffic before specific threats to that application are introduced. This helps a reader understand where an issue may have occurred and, in turn, determine whether that issue applies to their use case and needs. Without context for test cases and clear precedence, the relationships established in the data lose much of their value, as this introduces subjectivity into how the tester interpreted the results.
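A simple way to picture this is as a dependency graph over test cases. The sketch below, using hypothetical test names, orders them so that each baseline runs before the tests that depend on it:

```python
# Sketch of test-case precedence using the standard-library graphlib.
# Test names and dependencies are hypothetical; the point is that baselines
# must run (and pass) before the tests that depend on them.
from graphlib import TopologicalSorter

DEPENDS_ON = {
    "throughput_tls13": {"throughput_cleartext"},        # baseline unencrypted first
    "app_threat_detection": {"app_traffic_recognition"}, # recognize the app first
    "throughput_cleartext": set(),
    "app_traffic_recognition": set(),
}

order = list(TopologicalSorter(DEPENDS_ON).static_order())
print("Valid execution order:", " -> ".join(order))
```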
Threats and tools used
After all, this is a security point of view, so what good is a security appliance or technology that cannot effectively detect and mitigate or manage threats according to its role? Key to this is threat relevancy, recency and efficacy. Threats against applications not deployed in an environment are of little value, and threats against applications that were retired years ago are relatively worthless. It is vital to have relevant threats (note I did not say "new", as attacker tactics include reusing old exploits that get aged out of signature tables) that are representative examples of current threats effectively exploiting a target. These should be documented to help evaluate their relevance to the deployment of the technology being considered.
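As a sketch of that relevance argument, the following filters a threat list against the applications actually deployed. The deployed-app set and the third threat entry are invented for the example; the first two CVEs are real, and the old one stays relevant precisely because its target is still deployed:

```python
# Illustrative relevance filter for a test's threat set. The deployed-app list
# and the EXAMPLE entry are hypothetical; the first two CVEs are real.
from datetime import date

DEPLOYED_APPS = {"apache_httpd", "openssl"}

threats = [
    {"id": "CVE-2021-41773", "target": "apache_httpd", "published": date(2021, 10, 5)},
    {"id": "CVE-2014-0160", "target": "openssl", "published": date(2014, 4, 7)},
    {"id": "EXAMPLE-0003", "target": "retired_crm_app", "published": date(2023, 1, 1)},
]

def relevant(threat: dict) -> bool:
    # Age alone does not disqualify a threat (attackers reuse exploits that
    # have been aged out of signature tables); the target must be deployed.
    return threat["target"] in DEPLOYED_APPS

for t in filter(relevant, threats):
    age = date.today().year - t["published"].year
    print(f"{t['id']} against {t['target']} (~{age} years old) is still relevant")
```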
Getting in touch – some feedback options
The last item is notable because there may be questions about the test approach, some detail or a possible issue with the tools used in the test. Having a method to reach the testing organization or the vendor for clarity and, in the event of an error, a correction or statement, reveals the true mettle of the testing organization. Testing is complicated and errors occasionally occur; if an error is made on the part of the testing house, owning it, addressing it and then communicating the resolution or findings is critical. A dead email address or no contact method at all is a sure sign it's a report without substance.
In the security world, a healthy dose of paranoia about potential weaknesses in the attack surface and about attacker efficacy against current technologies is always warranted. Leveraging reliable, relevant test data can save an incredible amount of time and frustration during product selection. Great testing also helps vendors improve their products and solutions, as it may reveal areas of weakness that should be addressed. Practitioners have a hard enough time keeping tabs on the bad guys without also worrying about claims based on bad or purposefully manipulated test data.
You cannot trust every report
Not all third-party tests are created equal and, unfortunately, some testing firms will write whatever a vendor asks them to, crafting a test that suits only that vendor's solution. With enough money, you can buy whatever you want in a report. These "tests" are marketing tools that do not accurately reflect the real-world use cases or the roles these technologies play in protecting an organization. Before using a test report to support a product selection decision, it is important to understand its validity.
Bottom line
Accurate testing and test results can save an enormous amount of time and resources for an organization. After all, proof of concept (POC) exercises are costly in terms of both time and resources. Even if a valid test only addresses a portion of use cases, it can still potentially shorten a POC from months to weeks or even days. However, it’s imperative to discern between reports that have value and reports that are not worth the digital ink. After all, you don’t want to have to guess whether you’re protected or not, do you?