Developers behaving badly: Why holistic AppSec is key
2023-12-07 20:30 | Source: securityboulevard.com


A recent survey shows that untested software releases, rampant pushing of unvetted and uncontrolled AI-derived code, and poor developer security hygiene are combining to seriously expand security risk across software development. Add in the explosion of low-code/no-code development and economic headwinds pressuring developers to deliver features with less support, and the AppSec world is in for a perfect storm in 2024.

While the buzz around shift-left security still gets a lot of play among DevSecOps advocates today — and for good reason — the mantra of “test early and test often” can only get an application security or product security team so far in moving the needle on software risk.

Comprehensive AppSec is much more than squashing bugs early in the development lifecycle. Leading organizations recognize they need to mature their AppSec approach to keep pace with modern development and release practices. Here’s why a more holistic AppSec approach is key.

[ See related: How legacy AppSec is holding back Secure by Design | See Webinar: Secure by Design: Why Trust Matters for Software Risk Management ]

Curbing bad developer security behavior

A recent survey of 500 developers worldwide, conducted by SauceLabs, illuminated a lot about “Developers Behaving Badly.” One of the key themes to bubble up from the report had nothing to do with when or how testing is conducted; it had to do with the security hygiene developers practice daily.

The fact is, it’s not so great. About three-quarters of developers admit to circumventing security measures — disabling multi-factor authentication (MFA) or bypassing the VPN, for example — to speed up their work. Similarly, 70% admit they’ve shared credentials, with 40% saying they do so regularly.

This report points to a huge need for security support in creating developer guardrails that are embedded in the CI/CD pipeline, so that developers can still move quickly but do so safely. That means putting in place well-architected identity and access management (IAM) functionality, as well as thoughtful permissions throughout the entire development workflow — especially when it comes to touching the highest-value assets.

Nir Valtman, founder of the software security firm Arnica, said minimizing the attack surface by reducing permissions to source code, the place where the problem starts, was key.

“If the company culture is to provide access to push code for all developers, then apply branch protection policies to require pull request-reviews by the right owners, and review the CI/CD permissions and triggers.”
—Nir Valtman
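Valtman’s branch-protection advice maps directly onto what source-control platforms expose. As a minimal sketch — the review count is an illustrative assumption, and the actual HTTP call is left out — the following builds the JSON body that GitHub’s `PUT /repos/{owner}/{repo}/branches/{branch}/protection` REST endpoint accepts, requiring code-owner review of pull requests and blocking force pushes:

```python
# Sketch: build a branch protection policy for GitHub's REST API.
# The review count and the decision to null out status checks are
# illustrative assumptions, not a recommended baseline.
import json

def branch_protection_payload(required_reviews: int = 1) -> dict:
    """JSON body for PUT /repos/{owner}/{repo}/branches/{branch}/protection."""
    return {
        "required_pull_request_reviews": {
            "require_code_owner_reviews": True,   # the "right owners" review PRs
            "required_approving_review_count": required_reviews,
        },
        "enforce_admins": True,        # no admin bypass of the policy
        "allow_force_pushes": False,   # protect history on the default branch
        "allow_deletions": False,
        "required_status_checks": None,
        "restrictions": None,
    }

if __name__ == "__main__":
    print(json.dumps(branch_protection_payload(), indent=2))
```

Sending this payload with an authenticated PUT request would apply the policy; tying the review requirement to a CODEOWNERS file is what routes approvals to the right owners.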

A big part of this holistic approach to curbing bad operational security is visibility. Valtman said organizations should also monitor for abnormal behavior in development tooling and code repositories — and, ideally, security should get developer buy-in for its approach.

“An abnormal behavior can be the result of an insider threat, account takeover or a malicious third-party library. Use an anomaly detection mechanism across your development ecosystem, but make sure the developers like the selected approach. Empower developers to own security in a simple and scalable way — let them pick the right security solution for them.”
—Nir Valtman
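One simple form of the anomaly detection Valtman describes is a statistical baseline over developer activity. This sketch — the per-developer push-size signal and the three-standard-deviation threshold are illustrative assumptions, not any particular product’s method — flags a push far outside a developer’s historical norm:

```python
# Sketch: flag abnormally large pushes against a per-developer baseline.
# The push-size signal and 3-sigma threshold are illustrative assumptions;
# real tooling would combine many signals (timing, repos touched, geography).
from statistics import mean, stdev

def is_anomalous(history: list[int], new_push_size: int, sigmas: float = 3.0) -> bool:
    """True if new_push_size exceeds mean + sigmas * stdev of past push sizes."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline = mean(history) + sigmas * stdev(history)
    return new_push_size > baseline

# A developer who usually changes ~10 files suddenly pushes 500: worth a look.
```

An alert like this would not prove compromise on its own — it is the prompt for the human review (or automated policy) that Valtman says developers need to buy into.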

Shift everywhere with your testing

Security testing — and the remediation and refactoring that follows — is obviously a core part of every application security program. Unfortunately, despite the best efforts of DevSecOps pundits and AppSec advocates, many of the security tests required of developers today remain out of phase with the CI/CD pipeline and are still conducted manually. When the Developers Behaving Badly survey asked developers, 67% said they could and did push code to production without conducting security testing, and nearly a third reported that they do it often or very often.

The most visible goal of the shift-left movement is to build security gates into the pipeline as early in the development process as possible and to automate those testing steps. But early tests at the code and component level won’t catch every AppSec risk, which is why many advocates say the strategy should focus on shifting right — or shifting everywhere — to root out risks that are visible only in the context of how software will be deployed, said Saša Zdjelar, chief trust officer at ReversingLabs and a longtime security practitioner.

“As you shift right, you lose componentry or unit level control, but you gain context, as people add more and more code. As first party code gets combined with third party commercial and open-source imports and includes, that container size grows and it becomes something closer and closer to a full-built product.”
—Saša Zdjelar

Whether an organization consumes or produces software, testing at the very end — just before pushing to production — makes it possible to check for malware that has infiltrated the software supply chain, tampering, problems with digital signatures, and the inclusion of sensitive information or development secrets.

“Those are the characteristics of software that we believe should be checked at the very end.”
—Saša Zdjelar
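Secret detection is the most approachable of those end-stage checks to illustrate. As a minimal sketch — the two patterns shown (AWS access key IDs and PEM private-key headers) are common public examples, and a real supply-chain scan covers far more, including tampering and signature verification — a final gate can grep the built artifact’s text for known secret shapes:

```python
# Sketch: a minimal final-stage check for development secrets in a build artifact.
# Only two well-known patterns are shown; production scanners use hundreds,
# plus entropy analysis, and also verify signatures and scan for malware.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(artifact_text: str) -> list[str]:
    """Return the names of secret patterns found in the artifact's text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(artifact_text)]
```

A non-empty result from a check like this at the release gate is exactly the kind of finding that earlier, component-level tests can miss, because the secret may only be baked in at build time.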

Account for development risks from generative AI

Further complicating the testing issue is the addition of generative AI to the development cycle. Tools like GitHub Copilot and ChatGPT stand to greatly accelerate developer productivity, but utilizing code produced through GenAI adds more to the risk equation.

On a recent episode of the Security Table podcast, longtime AppSec veteran Jim Manico, founder of Manicode Security, explained the scenario succinctly.

“To be a developer and not to use AI is going to put you behind the eight-ball real fast. To use AI as a developer is necessary because if you don’t your productivity is going to be one-third to a fourth of your peers. But if you’re using AI without security review, you’re screwed in a bad way.”
—Jim Manico

The Developers Behaving Badly report found that most developers are failing to do that review. Approximately 61% say they’ve used untested code generated by ChatGPT, and more than a quarter do it regularly.

Holistic AppSec programs are going to need the policies, developer education, tooling, and security guardrails necessary to meet these AI risks head-on, because with tools like GitHub Copilot, the embedding of generative AI into developer processes is inevitable.

Low-code/no-code: A call to action on guardrails

Speaking of inevitability, another huge risk looming for organizations comes from low-code/no-code development environments — used by professional developers and citizen developers alike. The issue didn’t make it into the Developers Behaving Badly survey, but when combined with generative AI, it is poised to cause the number of applications needing security scrutiny to mushroom.

Michael Bargury, founder of low-code/no-code security firm Zenity, and author of the OWASP Top 10 for Low-Code, said the situation was already getting out of control.

“How does application security look when you are taking all your business users under your umbrella and allowing them to push code? And we are seeing [generative AI] make this even more of an issue — we’re seeing thousands of applications being developed by AI in low-code/no-code environments and being directly deployed to production.”
—Michael Bargury

Bargury said Zenity is working with many Fortune 100 companies that are grappling with how to create a holistic AppSec program that includes the enormous body of apps produced this way. He described one engagement with a security team that had been looking at applications built by generative AI across its entire organization — 500 AI-derived applications, “and that was before they realized they hadn’t accounted for low-code apps.”

Once the company was able to get a Software Bill of Materials (SBOM) on the low-code environment, it found it had about 7,000 applications built with low code and generative AI.

“The magnitude is enormous.”
—Michael Bargury

At the same time, there’s no stopping the tide of low-code/no-code. Just like with the rest of development environments, the modern AppSec team will need to start building automated guardrails and testing into low-code/no-code development in order to attain holistic AppSec.

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Ericka Chickowski. Read the original post at: https://www.reversinglabs.com/blog/developers-behaving-badly-why-holistic-appsec-is-key

