“I can’t show this to my customers! I need a clean report!”
At Include Security, we put a lot of care into our penetration test reports. But over the years, we’ve noticed that our reports are sometimes interpreted in ways we did not intend. This is understandable. Different people, with different backgrounds, goals, and incentives, will naturally read the same document differently. That is the nature of communication. Still, we think it is worth clarifying some of our intentions and addressing some common misinterpretations. In this post, we’ll walk through the most common misconceptions we encounter and explain our perspectives as an expert pentesting team.
As we acknowledged above, interpretations of a report will depend on the reader. When we deliver a report, we have four primary audiences in mind:
Our client. First and foremost, we are hired to help improve the security of a client’s technology. The report documents what we tested, what we found, and what we understand about the security posture of the system. The goal is to help our client make informed decisions about the security of their systems and applications.
Our client’s customers. Many organizations purchasing products and services require evidence of third-party security assessments from their vendors. We take that responsibility of independent review seriously. When a customer reviews one of our reports, we want them to know that it was written with integrity and technical rigor.
Auditors. Although we are not ourselves auditors, our penetration test reports are often used during compliance reviews or audits to demonstrate that testing has been performed. In these cases, our reports must clearly describe the scope, methodology, findings, and remediation status. Auditors must determine from this content whether compliance requirements have been met.
Ourselves. Many clients conduct periodic assessments of the same systems. While we take extensive internal notes, past reports are a key input to future assessments. They serve as part of our institutional memory, so they need to be thorough, accurate, and clear.
Reflecting on many report readout meetings and post-delivery conversations with our clients, we’ve identified the three misinterpretations that require the most additional communication to reach alignment.
On many occasions, we’ve received alarmed responses from clients about findings in a report. The concern usually sounds something like: “I need to show this report to my customers, and if they see we have any vulnerabilities, they won’t want to do business with us.” We completely understand why a customer would want to avoid purchasing software with a poor security record. However, the information in the report needs to be read in the proper context: it is a snapshot in time. Vulnerabilities may have been introduced into the test environment during the latest round of feature development, and they may well be fixed before they are ever exposed to the world.
We have tested code from startups as well as established tech giants. We’ve examined code built with a wide range of programming languages and frameworks. Nobody is writing code that does anything interesting without occasionally introducing some security vulnerabilities.
The presence of vulnerabilities in a penetration test report does not necessarily indicate any deficiency in the developers or their software development process. Just as great writers benefit from editors, great engineers benefit from outside testers. A report with findings does not mean the team failed. It means security experts looked closely and found areas that could be improved.
By the time you read this, this blog post will have been through several revisions and incorporated feedback from multiple readers and editors. It is considerably less complex than most software projects, and yet the initial draft still didn’t get everything right (and this published version probably doesn’t either!).
Some application tests result in a report with few or no vulnerabilities identified because the applications have been hardened over time and the core code has been subjected to repeated testing. With limited code changes between tests, the number and severity of vulnerabilities declines. This is great.
However, many application tests reveal few findings for less comforting reasons: a short testing window, a narrowly constrained scope, black-box-only access, or testing that simply wasn’t thorough enough to find what’s there.
A finding in the report is not a demand for remediation. Penetration testing identifies technical risks. Whether or not to remediate those risks is a business decision. Penetration testers do not know the client’s budget, roadmap, risk tolerance, or the business value of each application or function. It is completely reasonable for a business to accept some risks and elect not to remediate certain vulnerabilities. That decision does not invalidate the finding, and it does not mean the finding is a false positive. It simply reflects that the cost of fixing a vulnerability can outweigh the business benefit of remediation. In these cases, we encourage our clients to document their reasoning for risk acceptance, and we include their explanation in our remediation report so that interested parties can understand the full context.
We understand the appeal of a report with no findings. It feels like a win. But we believe there are better indicators of a strong security posture:
Regular testing. One report is just a snapshot. Security is an ongoing process. Integrate secure development practices, code reviews, and internal QA into your software development lifecycle. Bring in third-party testers regularly to catch what might be missed internally.
Good remediation reports. The contents of the initial report are only half the story. Confirmation that the identified vulnerabilities have been fixed is evidence that a client’s assurance process is achieving its aim of improving the application’s security.
Reports without caveats. A zero-finding report from a short, constrained, black-box test tells you less than a report from a thorough test that uncovered real vulnerabilities and explained them clearly.
Reports from skilled, reputable testers. Testing is only as good as the people doing it. A short report might reflect a secure system, or it might reflect weak testing. A strong report demonstrates expertise by explaining how the system works and why certain classes of vulnerabilities were or were not present.
Penetration test reports are tools for improving security. When read in the right context, even reports full of findings can be signs of a mature, proactive development culture. The goal isn’t a perfect report; it’s a stronger, more resilient system.