Trust Boundary of SaaS Will Include Customers' AI Agents
2026-04-24 Author: zeltser.com

SaaS vendors should assess whether their trust boundary includes customers' AI agents. Liability has pushed banks toward securing the customer's device four times, and the fifth wave is forming around AI agents.

Trust Boundary of SaaS Will Include Customers' AI Agents - illustration

As SaaS vendors make their products usable by customers’ AI agents, they’ll face a trust-boundary decision. Is the vendor responsible for securing any aspect of the customer’s client system? The answer might seem like an easy “no,” but financial services have answered it four times, always with some form of “yes.”

Banks now fingerprint browsers, shield mobile apps, score typing rhythm, and bind credentials to device hardware. Each security measure followed a specific threat, loss, or legal action. This pattern will repeat for customers’ AI agents, and the last four rounds inform how we should prepare for the next one.

Agent infrastructure is shipping ahead of its defenses.

AI agents are a new endpoint for interacting with SaaS, but the threats against them lack strong defenses. For example, OpenAI flagged that prompt injection is unlikely to ever be fully “solved.” Simon Willison’s “lethal trifecta” of sensitive data access, untrusted content, and outbound connectivity describes the capabilities that enable exploitation.
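
To make the trifecta concrete, here is a minimal sketch (names and structure are my own, not from Willison's writing) that flags an agent integration as exploitable only when all three conditions hold at once:

```python
# Hypothetical sketch: an agent integration is exposed to the
# "lethal trifecta" only when all three risk conditions hold at once.
from dataclasses import dataclass

@dataclass
class AgentCapabilities:
    reads_sensitive_data: bool       # e.g. account details, documents
    ingests_untrusted_content: bool  # e.g. web pages, inbound email
    has_outbound_connectivity: bool  # e.g. can call external APIs

def lethal_trifecta(caps: AgentCapabilities) -> bool:
    """Exploitation via prompt injection needs all three together:
    something to steal, a channel for injected instructions, and a
    channel to exfiltrate the result."""
    return (caps.reads_sensitive_data
            and caps.ingests_untrusted_content
            and caps.has_outbound_connectivity)

# Removing any one leg breaks the chain.
print(lethal_trifecta(AgentCapabilities(True, True, True)))   # True
print(lethal_trifecta(AgentCapabilities(True, True, False)))  # False
```

The point of modeling it this way is that a vendor who cannot prevent prompt injection can still remove one leg, for example by cutting outbound connectivity for sessions that touched untrusted content.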

Every SaaS product that interacts with a customer’s AI agent inherits that attack surface. The exposure is greatest for consumer-facing products because enterprise customers are subject to security controls from their organizations.

In the meantime, vendors are making increasingly powerful capabilities accessible natively to AI agents. In banking, for example, Meow lets customers open and run business accounts through AI agents with per-transaction limits. GoCardless targets bank-payment integration, introducing MCP as groundwork for agentic commerce.

Card networks are starting to write the rules for agent commerce before the defenses take shape. Visa Trusted Agent Protocol and Mastercard Agent Pay were announced in 2025. American Express followed in April 2026 with a network-level liability commitment that covers agent-initiated purchases.

How should vendors decide whether, when, and how to invest in securing customers’ AI agent systems? We can extrapolate from how the banking industry has answered versions of that question over recent decades.

Four drivers push providers toward the customer’s device.

Four drivers have shaped when and how banks extended security measures onto the customer’s device:

  • Liability: US Regulation E (1979) and the UK's APP reimbursement rules (2024) shifted fraud losses onto banks, and banks funded defensive controls in response.
  • Regulatory standard of care: Guidance from the FFIEC in 2005 through the EBA RTS on SCA in 2018 each raised the minimum controls banks had to deploy.
  • Customer inability to self-protect: Banking trojans in the late 2000s and mobile malware in the early 2010s pushed banks toward device fingerprinting, transaction signing, and out-of-band confirmation.
  • Loss economics: Once liability rules assigned the losses to banks, fraud grew costly enough to justify app shielding and behavioral biometrics at scale.

These drivers produced four waves of customer-device controls. A fifth wave is forming around AI agents, and history predicts how it’ll play out.

Four waves pushed banks onto the customer’s device.

Four waves pushed banks to deploy new security measures on customers' devices. Each wave came from a mix of threats, research, court cases, and regulations.

Regulation and liability are the constants across all four waves. Regulators raised the standard of care, while courts and rules put liability on banks. Banks deployed different controls in different waves, but this pressure drove every round.

Liability will shape agent-era defenses.

Courts and regulators still need to decide who pays when a compromised AI agent authorizes or takes an action that looks intentional. Once they do, liability will drive the timing and scope of agent-era defenses.

For risky transactions, banks stopped trusting users’ devices and built defenses that operated outside them. Similarly, agent-era defenses will need to work outside the potentially compromised AI agent. Measures can include agent identity verification, agent behavior analytics, transaction-bound signing, and out-of-band human confirmation for high-risk actions.
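
One way to picture a provider-side control is a policy gate that classifies agent-initiated actions and forces out-of-band human confirmation for high-risk ones, regardless of what the agent asserts. This is a hypothetical sketch; the action names and threshold are illustrative assumptions:

```python
# Hypothetical provider-side policy gate: the decision runs on the
# vendor's side, outside the possibly compromised agent.
HIGH_RISK_ACTIONS = {"wire_transfer", "add_payee", "change_credentials"}

def gate(action: str, amount_cents: int = 0,
         threshold_cents: int = 10_000) -> str:
    """Return the provider's decision for an agent-initiated action."""
    if action in HIGH_RISK_ACTIONS or amount_cents > threshold_cents:
        # e.g. push a confirmation prompt to the human's phone
        return "require_oob_confirmation"
    return "allow"

print(gate("read_balance"))                          # allow
print(gate("wire_transfer"))                         # require_oob_confirmation
print(gate("export_report", amount_cents=20_000))    # require_oob_confirmation
```

The key design choice is that the agent never sees or influences the classification: it only experiences some requests pausing for human confirmation.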

Financial services implemented transaction-bound signing in the pre-agent era. Germany’s chipTAN binds the signing step to a separate device that confirms the recipient and amount before the bank accepts. An agent-era equivalent would bind signing to something the agent can’t observe or forge.
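
A chipTAN-style scheme can be sketched with an HMAC computed over the exact recipient and amount, using a secret held on a channel the agent cannot observe. This is an illustrative simplification, not how chipTAN is actually implemented:

```python
# Hypothetical sketch of transaction-bound signing: the confirmation
# code covers the exact recipient and amount, so a tampered
# transaction fails verification even if the agent replays the code.
import hmac
import hashlib

def sign_transaction(secret: bytes, recipient: str,
                     amount_cents: int, challenge: bytes) -> str:
    """Compute a short confirmation code bound to one transaction."""
    msg = f"{recipient}|{amount_cents}|".encode() + challenge
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

def verify(secret: bytes, recipient: str, amount_cents: int,
           challenge: bytes, code: str) -> bool:
    expected = sign_transaction(secret, recipient, amount_cents, challenge)
    return hmac.compare_digest(expected, code)

secret, challenge = b"device-secret", b"nonce-123"
code = sign_transaction(secret, "ACME GmbH", 125_00, challenge)
print(verify(secret, "ACME GmbH", 125_00, challenge, code))  # True
# A compromised agent that swaps the recipient invalidates the code:
print(verify(secret, "Mallory", 125_00, challenge, code))    # False
```

The binding matters more than the cryptography: because the code commits to the recipient and amount, the agent cannot obtain a confirmation for one transaction and spend it on another.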

As SaaS vendors prepare for AI agents, four actions are worth considering:

  • Map your customer’s AI agent scenarios to the liability and reimbursement rules applicable to your product.
  • Inventory where customer-side agents reach your product, including direct API traffic, MCP servers, and browser automation. Commerce products should add payment protocols such as Stripe ACP, PayPal MCP, AP2 intents, and Visa Trusted Agent Protocol to that list.
  • Favor provider-side controls over any step that asks the agent or principal to act, since either can be compromised.
  • Require agent identity, intent signing, and out-of-band confirmation for high-risk actions.
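
The agent-identity requirement in the last point can be sketched as a provider-side check of a request signature against a key registered for that agent. The registry, header layout, and signing scheme here are assumptions for illustration:

```python
# Hypothetical sketch of agent identity verification: the provider
# accepts a request only if its signature matches the key registered
# for the claimed agent ID.
import hmac
import hashlib

AGENT_KEYS = {"agent-42": b"registered-secret"}  # provider-side registry

def verify_agent(agent_id: str, body: bytes, signature: str) -> bool:
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return False  # unknown agents are rejected outright
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

sig = hmac.new(b"registered-secret", b'{"action":"pay"}',
               hashlib.sha256).hexdigest()
print(verify_agent("agent-42", b'{"action":"pay"}', sig))  # True
print(verify_agent("agent-99", b'{"action":"pay"}', sig))  # False
```

Identity alone does not stop a compromised-but-registered agent, which is why the article pairs it with intent signing and out-of-band confirmation.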

Customer-side AI agents trigger the fifth wave of pressure on providers to secure customers’ devices. Liability has shaped the previous four, and it’ll shape the current one too.


Source: https://zeltser.com/saas-ai-agent-trust-boundary