AI in Security: Efficiency or Blind Spots?
Summary: The article examines how AI use in software development and security can create "epistemic debt", a shrinking understanding of the systems developers and analysts work with. Security teams are using tools such as Juno, which makes the AI's reasoning transparent, to keep the efficiency gains while preserving that understanding.

2026-03-31 03:10:52 | Author: www.uptycs.com

How security teams are using Juno to bring visibility back

AI is making software engineers dumber.

In a 2026 study, Anthropic found that developers using AI assistance scored 17 percentage points lower when tested on code they had just written. In another study, researchers found that programmers using generative AI completed more tasks but showed no improvement in their understanding of the codebase. And in a randomized trial by METR, experienced open-source developers believed AI made them 24% faster. When measured, they were actually 19% slower.

The common thread across these studies is that developers stopped building a mental model of the systems they were working on. The AI handled the reasoning and they accepted the output, without building a picture of how things actually fit together. Researchers are calling this epistemic debt: you're productive today, but you're borrowing against understanding you'll need tomorrow.

A similar dynamic is playing out in security.

Security teams are adopting AI faster than almost any other function, and for good reason. Alert volumes are unmanageable, experienced analysts are hard to find, and every major vendor now offers some form of AI-powered assistant.

But the underlying pattern is the same. The AI processes an alert, produces a disposition, and the analyst accepts it without ever investigating what actually happened. Over time, the team loses its understanding of the environment it's supposed to be protecting.

AI’s Visibility Problem

The main culprit here is hidden reasoning. You see the output, never the process. And you can't learn from a process you never see.

In most AI-assisted workflows, the model ingests an alert, correlates some data internally, and returns a verdict. What you don't see is which data sources it checked, what it ruled out, or what assumptions it made along the way. The analyst has no way to tell whether the AI did a thorough investigation or a shallow pattern match that happened to produce a plausible-sounding answer. And this matters more in security than almost anywhere else. 

The obvious solution is to make the AI's reasoning visible.

If an analyst can see how the AI reached its conclusion, they get the speed without losing the understanding. Instead of just receiving a disposition, they follow an investigation. And every investigation they follow builds their mental model of the environment.

How Security Teams Are Using Juno

Instead of receiving a verdict from a black box, security teams using Juno follow the full investigation as it happens: every query, every data source, every hypothesis tested and ruled out. At Uptycs, we call this the Glass Box.
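The idea can be sketched in code. Below is a minimal, hypothetical model of a "glass box" investigation trace: every step the AI takes is recorded so an analyst can replay the reasoning, not just read the verdict. The names and structure here are illustrative assumptions, not Juno's actual API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Step:
    """One recorded step of the AI's investigation (illustrative only)."""
    hypothesis: str          # what the AI is testing
    data_source: str         # e.g. "cloudtrail", "iam"
    query: str               # the query it ran
    finding: str             # what the result showed
    ruled_out: bool = False  # whether this hypothesis was eliminated

@dataclass
class Investigation:
    alert_id: str
    steps: list = field(default_factory=list)
    disposition: Optional[str] = None

    def record(self, step: Step) -> None:
        self.steps.append(step)

    def transcript(self) -> str:
        """Render the full reasoning chain, not just the final verdict."""
        lines = [f"Alert {self.alert_id}"]
        for i, s in enumerate(self.steps, 1):
            status = "ruled out" if s.ruled_out else "supported"
            lines.append(
                f"  {i}. [{s.data_source}] {s.hypothesis} -> {s.finding} ({status})"
            )
        lines.append(f"Disposition: {self.disposition}")
        return "\n".join(lines)
```

The point of the structure is that the transcript, not the disposition, is the primary artifact: an analyst who reads it sees which sources were checked and which hypotheses were eliminated along the way.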


Consider three recent cases:

A security team gets a GuardDuty alert flagging unauthorized access on an IAM user. With Juno, they don't get a one-line disposition. They watch Juno run eight SQL queries across four data sources in 90 seconds. It checks CloudTrail and IAM activity, compares the flagged IP against peers on the same role, and pulls the hourly timeline. By the end, the analyst knows the IP was on the threat list by mistake and that its behavior matched normal automated agent traffic.
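One of the steps described above, comparing the flagged IP's behavior against peers assuming the same role, can be sketched as a simple similarity check. This is an illustrative assumption about how such a comparison might work; the event shape and function name are hypothetical, not GuardDuty or Juno output.

```python
def peer_similarity(flagged_events, peer_events):
    """Jaccard similarity between the sets of API actions each source called.

    A score near 1.0 means the flagged IP behaves like its peers
    (consistent with normal automated traffic); a score near 0.0
    suggests anomalous activity worth escalating.
    """
    flagged_actions = {e["action"] for e in flagged_events}
    peer_actions = {e["action"] for e in peer_events}
    if not flagged_actions and not peer_actions:
        return 1.0  # nothing to compare; treat as indistinguishable
    overlap = flagged_actions & peer_actions
    union = flagged_actions | peer_actions
    return len(overlap) / len(union)

# Example: the flagged IP calls exactly the same APIs as its peers.
flagged = [{"action": "s3:GetObject"}, {"action": "s3:ListBucket"}]
peers = [{"action": "s3:GetObject"}, {"action": "s3:ListBucket"}]
score = peer_similarity(flagged, peers)  # 1.0: matches peer behavior
```

In a glass-box workflow, the analyst would see this comparison as one recorded step, with the query, the score, and the conclusion it supported, rather than having it folded invisibly into a final verdict.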

When an AWS cost spike needs explaining, the team watches Juno test multiple hypotheses in parallel and can trace every step of the reasoning. But Juno also surfaces exposures in the expanded infrastructure that nobody asked about. The team comes out of that investigation knowing more about their environment than they did going in.

And when two teams in different industries face the same technical finding, they see Juno build completely different investigations based on their business context. A fintech team sees findings framed around SOC 2 controls. A software company sees the same finding framed around supply chain risk. The reasoning is visible and tied directly to the context each team provided.

In each case, the analyst is building a clearer picture of their environment.

The Question Worth Asking

The research on epistemic debt points to a simple question: Is your AI making your team more capable, or more dependent?

Your metrics won’t answer it. Alert closure rates go up either way. Mean time to respond drops either way. A team that’s getting better and a team that’s losing its edge can look the same on a dashboard.

You see the difference when something new shows up. A threat that doesn’t match a playbook. An alert that actually needs investigation. That’s when it becomes clear whether your team understands the system or is just moving outputs along.

Making the reasoning visible helps. It gives people something to follow and learn from. But that alone isn’t enough. They also need to trust it. That becomes harder, and more important, as AI systems start making decisions without a human in the loop.

Meet Juno AI


Source: https://www.uptycs.com/blog/is-ai-making-security-teams-more-efficient-or-blind