LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle
In 2025, regulators tightened the rules for AI and cybersecurity: disclosure deadlines, an expanded chain of liability, and added compliance requirements. Yet security teams face a widening gap between policy and practice. Experts point to under-resourced framework implementation, AI policies that assume visibility no one actually has, and small teams stretched thin by overlapping mandates. Compliance pressure is mounting, but governance has yet to catch up.

By Byron V. Acohido

Regulators made their move in 2025.

Disclosure deadlines arrived. AI rules took shape. Liability rose up the chain of command. But for security teams on the ground, the distance between policy and practice only grew wider.

Part two of a four-part series

In Part One of our 2025 Roundtable, we traced how accountability got personal. CISOs, board members, and frontline responders found themselves navigating a risk environment where blame had names — and consequences.

That shift set the tone for what’s happening now: accountability not just as legal exposure, but as day-to-day friction between regulatory ideals and operational realities.

This second installment turns the spotlight on that disconnect.

What happens when frameworks arrive faster than the resources to implement them? When AI policies presume visibility no one actually has? When overlapping mandates burden small teams already stretched thin? Across sectors, our experts describe a mounting pressure to comply, but with guardrails still forming and incentives often misaligned.

These aren’t complaints. They’re field notes from professionals trying to get it right in a landscape that’s moving faster than governance can catch up.

What emerges is a candid set of insights from the messy middle — where policy meets practice, and good intentions still leave room for new risk.

Valente

Alla Valente, Principal Analyst, Forrester

Cyber risk shifted in 2025 from a compliance issue to a business impact issue, driven by SEC disclosure enforcement and the financial fallout from attacks. Boards are funding cyber risk initiatives because earnings are now directly affected. At the same time, AI governance is rising without federal rules as litigation costs surge. Compliance alone does not protect value. Organizations need to align business strategy with risk maturity to innovate responsibly.

DiLullo

John DiLullo, CEO, Deepwatch

The U.S. is likely to adopt a European-style resilience mandate that makes incident reporting mandatory and turns transparency into a competitive differentiator. Organizations that build reporting workflows and resilience frameworks early will be ahead when enforcement lands. Waiting for mandates leaves teams unprepared. Mature programs treat compliance as part of operational readiness, not an audit exercise. Early investment eases future requirements and prepares leadership for higher accountability expectations.

Balaban

Murat Balaban, CEO, Zenarmor

Compliance requirements multiplied in 2025, but chasing each rule individually misses what regulators actually want. Architecture needs to be auditable by design. Enforcement, logging, and inspection must prove themselves without special projects every time mandates change. Zero Trust at the source and machine-verifiable decisions make compliance a natural outcome instead of a fire drill. The strongest paths are transparent architectures that are easy to explain, observe, and defend.
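
In practice, a "machine-verifiable decision" can be as simple as an access check that emits a signed, structured audit record as a side effect of every decision, so evidence exists without a special project. The sketch below is illustrative only, not Zenarmor's implementation; the policy shape, field names, and key handling are assumptions.

```python
import hashlib
import hmac
import json
import time

# Illustrative signing key; in practice this would live in a secrets manager or HSM.
AUDIT_KEY = b"replace-with-managed-secret"

def decide(subject: dict, resource: str, action: str) -> dict:
    """Evaluate a request against a simple entitlement rule and return a
    signed, machine-verifiable decision record (hypothetical policy model)."""
    allowed = action in subject.get("entitlements", {}).get(resource, [])
    record = {
        "ts": time.time(),
        "subject": subject["id"],
        "resource": resource,
        "action": action,
        "decision": "allow" if allowed else "deny",
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # The HMAC lets an auditor holding the key verify each decision independently.
    record["signature"] = hmac.new(AUDIT_KEY, payload, hashlib.sha256).hexdigest()
    return record

if __name__ == "__main__":
    user = {"id": "u-42", "entitlements": {"billing-db": ["read"]}}
    print(decide(user, "billing-db", "read"))   # allow, with signature
    print(decide(user, "billing-db", "write"))  # deny, also evidenced
```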

Carignan

Nicole Carignan, SVP, Darktrace

As organizations embed AI and agentic systems into workflows, governance must keep pace with complexity and innovation. There is no universal model because each business has different risk profiles and regulatory needs. Effective AI policy requires executive ownership and tailored controls, not generic templates. Organizations that adopt external AI solutions still need internal accountability. The goal is responsible deployment without slowing the innovation that drives competitive advantage.

Srivatsav

Mandates are table stakes now. Compliance deadlines create policies and reports, but they do not produce defensible proof when data and AI systems change daily. I keep seeing yesterday’s compliant posture turn obsolete when models and connections evolve. The smarter move is treating compliance as an engineering property. Assume compromise, reduce plaintext exposure, and make evidence automatic. Confidential AI will matter because regulators and customers judge protection in real time.
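
One hedged reading of "reduce plaintext exposure and make evidence automatic" is to encrypt sensitive fields at write time and log an evidence entry as part of the same call. This sketch uses the open-source `cryptography` package; the field names, key handling, and evidence format are assumptions for illustration, not the contributor's stack.

```python
import json
import time
from cryptography.fernet import Fernet

# Illustrative key handling; a real deployment would use a KMS or HSM.
key = Fernet.generate_key()
cipher = Fernet(key)
evidence_log = []  # stand-in for an append-only evidence store

def store_record(record: dict, sensitive_fields: set) -> dict:
    """Encrypt sensitive fields before storage and log evidence automatically."""
    protected = {}
    for field, value in record.items():
        if field in sensitive_fields:
            protected[field] = cipher.encrypt(str(value).encode()).decode()
        else:
            protected[field] = value
    evidence_log.append({
        "ts": time.time(),
        "event": "field_encryption",
        "fields": sorted(sensitive_fields & record.keys()),
    })
    return protected

if __name__ == "__main__":
    row = store_record({"name": "Ada", "ssn": "123-45-6789"}, {"ssn"})
    print(json.dumps(row, indent=2))  # stored form never holds the plaintext SSN
    print(evidence_log)               # proof of protection produced automatically
```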

Adduri

Heavy-handed compliance efforts often backfire. Blocking tools and restricting IT access create the very visibility gaps that compliance is meant to close. The smarter 2026 play is to build “paved roads” — secure infrastructure, vetted SaaS, and sanctioned AI copilots that are easier to use than shadow alternatives. The mindset shift is clear: make the governed path the path of least resistance.

Bud

Andrew Bud, CEO, iProov

Passkeys reduced phishing risk in 2025, but the real exposure is recovery. A major breach could surface when an attacker exploits a weak legacy fallback, proving that authentication strength depends on the entire credential lifecycle. The pressure now falls on enterprises to secure the back door, not just the front. High assurance biometric re-verification becomes a requirement as regulators recognize the limits of passwordless success.

Nadkarni

Renuka Nadkarni, Chief Product Officer, Aryaka

Economics will drive the next wave of AI security, not algorithms. Startups cannot keep charging more to secure AI than the cost of running AI. That imbalance forces consolidation into integrated models where AI becomes part of broader security architectures. Treating AI as a traffic class helps apply foundational controls like access, threat protection, and monitoring. Consolidation feels inevitable as organizations avoid expensive point tools and look for practical governance.
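
To picture "treating AI as a traffic class," imagine classifying outbound requests by destination and applying baseline controls to the AI class by default. The endpoint list and policy flags below are illustrative assumptions, not Aryaka product behavior.

```python
from urllib.parse import urlparse

# Hypothetical list of known AI API hosts; a real deployment would maintain this centrally.
AI_HOSTS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def classify(url: str) -> str:
    """Tag a request as AI traffic or general egress."""
    host = urlparse(url).hostname or ""
    return "ai" if host in AI_HOSTS else "general"

def apply_policy(url: str, user: str) -> dict:
    """Apply baseline controls (inspection, payload logging) based on the traffic class."""
    traffic_class = classify(url)
    return {
        "user": user,
        "url": url,
        "class": traffic_class,
        # AI traffic gets content inspection and prompt/response logging by default.
        "inspect": traffic_class == "ai",
        "log_payload": traffic_class == "ai",
    }

if __name__ == "__main__":
    print(apply_policy("https://api.openai.com/v1/chat/completions", "u-7"))
    print(apply_policy("https://example.com/report", "u-7"))
```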

Pichardo

Cyber insurance is shifting toward quantifiable risk as AI raises potential losses. Insurers want hard evidence of resilience, not broad statements about security posture. Organizations with fragmented governance will feel that pressure in premiums and exclusions. Centralized visibility across subsidiaries becomes the new baseline. Resilience will be measured financially, across the full enterprise footprint, not by local plans inside individual business units. The policy language is already moving there.

Miracco

Accountability shifted in 2025 toward downstream victims rather than negligent software vendors. SolarWinds customers paid fines for disclosure gaps while root cause engineering failures went unpunished. CISOs now face greater liability for how they describe breaches than for allowing them to happen. This encourages paperwork over resilience. The message is that legal systems punish symptoms, not causes, creating incentives that protect filings more than code. Liability feels misaligned with actual risk.

Leichter

Willy Leichter, Chief Marketing Officer, PointGuard AI

AI regulation is multiplying, but most rules trace back to accountability rather than detailed control. Government processes lag technology, so waiting for mandates leaves organizations exposed. Anthropic’s disclosure of a state-sponsored attack highlights a mindset of transparency and proactive defense. The smarter 2026 play is to use regulatory intent as a guide and get ahead of threats rather than race to satisfy annual reporting that is outdated on arrival.

Simberkoff

Dana Simberkoff, Chief Risk, Privacy and Information Security Officer, AvePoint

AI regulation will arrive unevenly, with Europe moving faster and U.S. rules developing through sector guidance. Compliance will depend on data governance, auditability, and documented purpose. Policies must address third-party models, privacy risk, and how AI uses sensitive information. Export controls and provenance requirements are becoming part of routine operations. Organizations that build lifecycle governance today will adapt quickly when regulation moves from discussion to enforcement.

Radkowski

Chris Radkowski, GRC Expert, Pathlock

AI regulation is accelerating in Europe and across U.S. states, creating fragmented rules that reshape deployment. Documentation, conformity assessments, and transparency requirements are becoming baseline expectations. At the same time, data localization and sovereignty mandates add pressure on global operations. Continuous controls are uneven, with automation improving technical safeguards while process oversight lags. Compliance and security are converging into real-time assurance where resilience depends on navigating regulation and enforcing controls at scale.

Chris Tait, Principal, Baker Tilly

Regulation is accelerating inside enterprises while the broader ecosystem stays fragmented and unpredictable. Boards and executives are being pushed to document oversight and prove responsible AI usage, yet public tools operate with few guardrails. Disclosure rules and vendor assessments are getting tougher, even as consumer platforms expose data with little protection. Organizations face rising governance obligations while navigating inconsistent public regulation. Preparedness now requires managing both realities at once.

Greene

David Greene, CMO, Simbian

Let’s stop treating regulations as the roadmap. In 2026, the smarter move is optimizing for speed — not compliance checklists. That means deploying AI-speed response tools, supplementing annual tests with more frequent offensive exercises, and giving your team the space to act on what they know. The best strategy centers human insight and pragmatic tooling. Regulators may lag, but our defenses can’t afford to.

Loeble

Dexter Loeble, VP of Customer Success, All Covered

AI is moving into every SaaS workflow faster than governance frameworks can adapt. Configuration sprawl and rapid commercial adoption of robotics challenge the idea that compliance alone provides security. The pace of change demands adaptable controls that evolve as quickly as the stack changes. Survivors will be the organizations that keep posture aligned with business speed. Compliance marks a starting point, but sustained security depends on continuous adjustment.

Taylor

Howard Taylor, CISO, Radware

Compliance is moving out of audit checklists and into board accountability as regulators expect proof of resilience, not policy language. Global rules like DORA and NIS2 are converging into a higher standard that demands continuous monitoring, automated evidence, and third-party validation. Companies that treat compliance as strategic will gain advantage. Those that wait may face an unforgiving shift where accountability for cyber resilience lands squarely in the boardroom.

Bajwa

Husnain Bajwa, SVP of Product, Risk Solutions, SEON

Regulators and financial intelligence units are struggling to match the speed of AI-driven systems. Current frameworks are fragmented, and fraud models built on static rules feel outdated. The next phase will steer toward unified standards rooted in transparency and configurability. Organizations need infrastructures that learn and evolve as fast as threats. The lesson from 2025 is that volume-based fraud detection no longer keeps pace with modern abuse patterns.

Williams

Jake Williams, VP of R&D, Hunter Strategy

Generative AI creates inconsistent results that collide with enterprise risk tolerances. Many AI projects entered production without enough threat modeling, and regulatory exposure is now real. In 2026, some organizations will scale back or even abandon initiatives they can’t secure. Others will re-scope for safer use cases. The push toward adoption needs to match actual enterprise risk, not early enthusiasm. Enterprises want predictable behavior, not output that is merely close to correct.

Rice

James Rice, VP of Data Security and Analytics, Protegrity

I see organizations racing to meet deadlines instead of building data protections into architecture. That mindset slows AI and analytics because teams keep stopping to clean up compliance gaps. The smarter move is treating compliance as part of everyday security design. If controls like tokenization and continuous monitoring are already in place, each new mandate becomes routine work instead of a disruptive project.
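
A minimal sketch of tokenization as an always-on design control rather than a per-mandate project: sensitive values are swapped for random tokens at ingest, and the mapping lives in a separate vault. The in-memory vault here is purely illustrative and does not depict Protegrity's products.

```python
import secrets

# Stand-in for a hardened token vault kept separate from application data.
_vault = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token and store the mapping."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = value
    return token

def detokenize(token: str, caller_is_authorized: bool) -> str:
    """Return the original value only on an approved path."""
    if not caller_is_authorized:
        raise PermissionError("detokenization denied")
    return _vault[token]

if __name__ == "__main__":
    t = tokenize("4111-1111-1111-1111")
    print(t)                    # analytics and new mandates see only the token
    print(detokenize(t, True))  # original value available only when authorized
```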

Knesek

Jill Knesek, CISO, BlackLine

Enterprise AI use is moving from experimentation to formal deployment, and boards will want clear governance models that show how risk is evaluated and controlled. Evidence will matter more than policy language, including verification of data sources and model decisions. Shadow AI exposes sensitive workflows without oversight, so blocking by default and allowing by exception becomes necessary. Boards will expect accountable governance that keeps pace with rapid AI adoption.
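
"Blocking by default and allowing by exception" can be as simple as a default-deny gate in front of AI tools, with every decision logged so board-facing evidence accumulates automatically. The tool names and exception register below are hypothetical.

```python
import time

# Hypothetical exception register: tool -> teams granted an explicit exception.
EXCEPTIONS = {"sanctioned-copilot": {"engineering", "support"}}
decision_log = []

def ai_tool_allowed(tool: str, team: str) -> bool:
    """Default-deny: a tool is usable only if an explicit exception exists."""
    allowed = team in EXCEPTIONS.get(tool, set())
    decision_log.append({"ts": time.time(), "tool": tool, "team": team, "allowed": allowed})
    return allowed

if __name__ == "__main__":
    print(ai_tool_allowed("sanctioned-copilot", "engineering"))  # True via exception
    print(ai_tool_allowed("random-chatbot", "finance"))          # False by default
```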

Nicastro

Joe Nicastro, Director of Technical Services, BlueFlag Security

Chasing each mandate is the slowest way to do security. U.S. rules tighten, but most land as guidance and self-attestation. You can be compliant while your production environment burns. The EU is driving real change with GDPR, DORA, and the Cyber Resilience Act. That pressure forces operational resilience and product security into engineering. The smarter 2026 move is building continuous assurance so compliance becomes a byproduct.

Astorino

John Astorino, COO, Auvik

Shadow AI will move from isolated notebooks to autonomous processes that act across systems. Governance must account for provenance, explainability, and continuous monitoring as agents operate at scale. Auditability and anomaly detection need to be built into every automated workflow, not added later. Autonomous systems force organizations to instrument interactions with policy enforcement and visibility. Compliance becomes about controlling automated behavior as much as securing traditional user activity.
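
One way to build auditability and anomaly detection into an automated workflow rather than adding it later: wrap each agent action so provenance is recorded and an out-of-pattern burst of activity is flagged. The window, threshold, and record fields are illustrative assumptions.

```python
import time
from collections import deque

ACTION_WINDOW_SECONDS = 60
MAX_ACTIONS_PER_WINDOW = 20  # illustrative baseline; tune per workflow

audit_trail = []
_recent = deque()

def record_agent_action(agent_id: str, tool: str, params: dict) -> dict:
    """Log provenance for an agent action and flag anomalous bursts."""
    now = time.time()
    _recent.append(now)
    # Drop timestamps that have aged out of the sliding window.
    while _recent and now - _recent[0] > ACTION_WINDOW_SECONDS:
        _recent.popleft()
    entry = {
        "ts": now,
        "agent": agent_id,
        "tool": tool,
        "params": params,
        "anomaly": len(_recent) > MAX_ACTIONS_PER_WINDOW,
    }
    audit_trail.append(entry)
    return entry

if __name__ == "__main__":
    print(record_agent_action("agent-billing", "send_email", {"to": "vendor@example.com"}))
```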

Black Duck

Governance must evolve as AI reshapes how software is built and what software even is. Guidance exists for testing and vulnerabilities, but technology moves faster than rules can be codified. Organizations should focus on a clear vision for secure AI rather than waiting for detailed mandates. Going back to fundamentals helps define how security and development work together. The challenge is embracing AI at scale without losing control of responsibility.

Heidhorn

Compliance shifted from awareness to mandatory governance across multiple sectors. The lesson from the Defense Industrial Base is that security has to be designed into systems from the beginning, not bolted on before audits. Sprinting to deadlines produces fatigue and superficial controls. The smarter approach treats compliance as an outcome of good engineering and continuous governance. That mindset prepares organizations for enforcement and aligns operations with evolving regulations by default.

McCurdy

Ryan McCurdy, VP, Liquibase

Regulation moved into the deployment pipeline in 2025. AI and disclosure rules now ask how systems evolve, not only who can access them. Organizations are more mature on identity than on change control. Chasing each mandate separately misses the point. The smarter move is modernizing the lifecycle itself and embedding policy checks into existing tools. Compliance then becomes an outcome of good engineering instead of a reactive project.
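
Embedding a policy check into the existing pipeline might look like a pre-deploy gate that fails the build when required change-control evidence is missing. The artifact names below are hypothetical and do not represent Liquibase tooling.

```python
import sys
from pathlib import Path

# Hypothetical evidence a change must carry before it may deploy.
REQUIRED_ARTIFACTS = ["change_ticket.json", "rollback_plan.md", "schema_review.txt"]

def predeploy_gate(change_dir: str) -> list:
    """Return the list of missing change-control artifacts for a pending change."""
    base = Path(change_dir)
    return [name for name in REQUIRED_ARTIFACTS if not (base / name).exists()]

if __name__ == "__main__":
    missing = predeploy_gate("release/2026-01-15")
    if missing:
        print(f"Blocking deploy, missing evidence: {missing}")
        sys.exit(1)
    print("Change-control evidence complete; proceeding.")
```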

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(Editor’s note: This feature was assembled with the assistance of ChatGPT, using human-led editorial judgment to shape, refine, and voice-check each entry.)

December 12th, 2025 | My Take | Top Stories

