one thing we kept noticing while testing security tools is that the problem isn’t just false positives by themselves
it’s what happens after teams have to deal with them over and over again
when a scanner keeps producing loads of findings and a big chunk of them turn out not to matter, people start changing how they react
they trust the output less
they skim instead of investigate
they focus only on the obvious criticals
and everything else starts blending into background noise
that feels like the real damage
not just “this tool is noisy”
but “this tool is training people to stop caring”
we wrote a bit about this after running traditional SAST tools across 10 open source repos and seeing how much noise came back versus how many findings were actually real:
https://kolega.dev/blog/the-87-problem-why-traditional-security-tools-generate-noise/
curious how other people see this
have security scanners made teams better at fixing issues where you’ve worked, or just more numb to vulnerability reports?