Get details on the new White House ONCD report, how to address it, and how Legit can help.
The White House Office of the National Cyber Director (ONCD) released a report late last month titled “Back to the Building Blocks: A Path Toward Secure and Measurable Software.”
The report highlights two areas that would improve software security and calls on the technical community to take steps to address them: the use of memory-safe programming languages, and the development of diagnostics to measure cybersecurity quality.
Memory-safe programming languages: The report specifically asks software manufacturers to move away from legacy programming languages such as C and C++ because of the memory safety risks they introduce, risks that newer programming languages have largely solved.
Measuring cybersecurity quality: The report also introduces the idea that we, as a community, need better ways to measure security quality within software, primarily in three distinct areas: 1) the developer process, 2) software analysis and testing, and 3) the execution (runtime) environment.
There are very real reasons that C and C++ still see heavy adoption despite their well-known security issues: they are fast; they offer substantially more granularity in many cases (think memory allocation, the very capability that creates the risk the report wants to eliminate); and they serve a very wide array of purposes that are hard to replicate (they are used for nearly everything: hardware, kernels, drivers, operating systems, etc.).
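To make that risk concrete, here is a minimal sketch (not from the report) of the bug class at issue: manual memory management in C++ lets a dangling pointer compile cleanly and then misbehave at runtime.

```cpp
#include <iostream>

// Minimal use-after-free sketch: the kind of memory-safety bug the report
// wants to eliminate. This compiles without complaint, but dereferencing
// the pointer after delete is undefined behavior.
int main() {
    int* balance = new int(100);
    delete balance;                  // memory is returned to the allocator
    std::cout << *balance << '\n';   // use-after-free: may print garbage,
                                     // crash, or appear to "work"
    return 0;
}
```

A memory-safe language rejects this pattern at compile time or catches it at runtime; in C++ it is on the programmer, and the tooling, to find it.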
This metrics recommendation is a challenging one, primarily because of the complexity of modern application development and all its moving pieces, but also because teams often lack visibility into, and an understanding of, how the different areas of the application and build environment are correlated. That missing correlation leads to a lack of context around the issues.
Without that context, measurability is generic and not very helpful. Consider CVSS scores and how they are used today: for the most part, a CVSS score says nothing about how a vulnerability maps to actual risk within a specific application or development environment. A critical-severity score on a library function your application never calls represents far less real risk than a medium-severity flaw in an internet-facing endpoint.
Ultimately, I agree with the Biden Administration that the ability to measure software security is both lacking and needed. I think the main reason we don’t have it today, and will struggle to build it in the future, is the siloed nature of application and cloud security.
Consider that we have cloud security teams and products, application security teams and products, developer teams and products, and build tool teams and products, all of which assess and measure different areas, in different ways, with different metrics and outcomes.
Until we start treating this as a complete product security problem, and recognize that these areas are too interconnected to be measured separately or distinctly, any measurement system we create will be flawed. It will be missing the critical context needed to distinguish actual risk from perceived risk.
I think this is less a technical gap and more a paradigm shift: we need to look at and address application risk holistically, end to end, including the application code, the software factory the code is built in, and the execution environment it runs in.
First, where possible, use only modern, memory-safe programming languages to eliminate large subsets of the known risks found in older languages, such as out-of-bounds reads/writes or use-after-free vulnerabilities. For products already built in older languages with known memory risks, there are a few things you can do to make them more secure: use modern idioms to produce safer and more reliable code (see the sketch below); use security tooling such as fuzzers and sanitizers to find issues before they reach production; and apply appropriate privilege separation where possible to minimize the blast radius should a vulnerability in your app get exploited.
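As a rough illustration of the “modern idioms” point, here is a minimal C++ sketch (the names here are mine, not from the report): RAII types own the memory, so there is no delete to forget and no raw pointer left to dangle, and a sanitizer build flag catches what the idioms alone cannot.

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical example type; any resource-owning object works the same way.
struct Account {
    std::string owner;
    int balance = 0;
};

int main() {
    // std::unique_ptr frees the Account automatically at end of scope:
    // no manual delete, so no double-free or use-after-free on this path.
    auto account = std::make_unique<Account>();
    account->owner = "alice";
    account->balance = 100;

    // std::vector owns its buffer (no new[]/delete[]), and .at() does
    // bounds checking, throwing instead of silently reading out of bounds.
    std::vector<int> ledger{10, 20, 30};
    std::cout << account->owner << ": " << ledger.at(1) << '\n';

    return 0;
}
// For the remaining bug classes, build with sanitizers during testing, e.g.:
//   g++ -std=c++17 -g -fsanitize=address,undefined main.cpp
```

Idioms like these don’t make C++ memory safe, but they shrink the surface where the classic bugs can occur, and sanitizers catch many of the rest before production.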
Regarding metrics, it all comes down to visibility and correlation. If all the information you have about your applications sits in silos, it will be very difficult to create metrics that are meaningful in any way. An application security posture management (ASPM) tool can help you consolidate and correlate the information coming from all your different application security tooling, development environments, cloud environments, and more.
Additionally, understanding what risk metrics are relevant to your organization and your risk appetite will help you build universal policies, which will make it easier to build metrics that can show progress as you mature your application security program.
My final recommendation is to use metrics that show relevant, actual risk reduction. This requires understanding how applications, runtime environments, mitigations, and so on are connected and correlated, which comes from having a holistic view of your software factory and the code produced within it. A reduction in vulnerability counts or mean time to remediate doesn’t show whether you are actually reducing risk. Did we just close out a bunch of low-severity vulnerabilities that don’t matter? Did we spend time on a critical vulnerability that was already mitigated elsewhere or wasn’t relevant to our risk profile? Visibility is key to every aspect of security, and creating meaningful metrics that help drive your application security program shouldn’t be any different.
Legit has been moving the needle on software quality measurement for some time and can help in a few of the areas mentioned in this report.
Learn more about our solution.
*** This is a Security Bloggers Network syndicated blog from Legit Security Blog authored by Joe Nicastro. Read the original post at: https://www.legitsecurity.com/blog/understanding-the-white-house-report-on-secure-and-measurable-software