Cybersecurity researchers at Palo Alto Networks have identified a series of security risks within Google Cloud’s Vertex AI platform, warning that misconfigured permissions could allow autonomous AI agents to behave like insider threats and access sensitive data beyond their intended scope.
The findings, published by the company’s threat intelligence unit, Unit 42, center on Vertex AI’s Agent Engine—an emerging platform designed to build and deploy AI agents capable of independently interacting with enterprise applications, data systems, and cloud services.
According to the report, researchers were able to demonstrate how a seemingly legitimate AI agent could be manipulated to extract its own credentials and subsequently use them to expand its access within a cloud environment. In effect, the agent operates as a “double agent,” functioning both as a trusted system component and a covert vector for data exfiltration.
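Unit 42 has not published a full proof of concept, but the underlying mechanism is a familiar one on Google Cloud: code executing inside a workload, including a tool invoked by an AI agent, can ask the local metadata server for an OAuth access token belonging to the workload's attached service account. The snippet below is purely illustrative of that general pattern; the endpoint and header are the standard ones for Google Cloud runtimes, and the actual prompts and tool calls used in the research may differ.

```python
import requests

# Standard Google Cloud metadata endpoint that returns an OAuth access token
# for the service account attached to the current workload. Any code running
# inside the agent's runtime (for example, a prompt-injected tool call) can
# reach it.
TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

resp = requests.get(TOKEN_URL, headers={"Metadata-Flavor": "Google"}, timeout=5)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# If this bearer token is exfiltrated (e.g. echoed back in the agent's reply),
# it authenticates API calls with the agent's full set of permissions.
print(access_token[:12] + "...")
```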
The issue does not stem from a single exploitable flaw but rather from a chain of design gaps and permissive default configurations. Unit 42 found that service accounts linked to deployed agents were often granted broad, and in some cases excessive, permissions. By leveraging these privileges, the researchers were able to access customer cloud storage, retrieve sensitive deployment configurations, and gain visibility into internal systems supporting the AI platform.
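How much damage such a chain can do depends on what the agent's service account is allowed to touch, and that is straightforward to probe with standard Google Cloud client libraries. The hedged sketch below picks up the ambient credentials of whatever environment it runs in, lists the storage buckets they can see, and tests a handful of sample project-level permissions; the permission names are illustrative choices, not a reproduction of Unit 42's findings.

```python
import google.auth
from google.cloud import storage
from googleapiclient import discovery

# Pick up the ambient credentials of the current environment -- for a deployed
# agent, this is the attached service account whose defaults Unit 42 found to
# be overly broad.
credentials, project_id = google.auth.default()

# Enumerate the Cloud Storage buckets this account can see.
storage_client = storage.Client(credentials=credentials, project=project_id)
for bucket in storage_client.list_buckets():
    print("visible bucket:", bucket.name)

# Ask IAM which of these sample permissions the account actually holds.
crm = discovery.build("cloudresourcemanager", "v1", credentials=credentials)
probe = {
    "permissions": [
        "storage.objects.get",           # read objects from buckets
        "aiplatform.endpoints.predict",  # call other Vertex AI endpoints
        "iam.serviceAccounts.actAs",     # impersonate further service accounts
    ]
}
granted = (
    crm.projects()
    .testIamPermissions(resource=project_id, body=probe)
    .execute()
    .get("permissions", [])
)
print("granted:", granted)
```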
The implications extend beyond a narrow technical vulnerability, pointing instead to a structural shift in enterprise cybersecurity risks as artificial intelligence becomes more deeply embedded in operational workflows.
Unlike traditional software components, AI agents are increasingly autonomous, often executing tasks without continuous human oversight. If compromised, they do not resemble external attackers breaching a system’s perimeter, but rather trusted insiders operating within it—making detection significantly more difficult.
The research underscores that over-permissioned AI agents can dramatically expand an organization’s attack surface, particularly when deployed at scale across interconnected systems. This raises fresh concerns about how enterprises manage trust, identity, and access in AI-driven environments.
Google was notified of the findings through responsible disclosure. In response, the company has updated its documentation to provide clearer guidance on how Vertex AI manages service accounts and permissions, though no specific software patch was cited.
Security experts say the episode highlights the urgent need for stricter governance around AI deployments. Central to this is enforcing the principle of least privilege—ensuring that AI agents are granted only the minimum access required to perform their functions. The report recommends the use of dedicated service accounts, tighter control over OAuth scopes, and rigorous pre-deployment security reviews akin to those applied to production-grade software.
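None of those recommendations requires new tooling; they map directly onto existing IAM primitives. As a hedged illustration (the project, bucket, and service-account names below are hypothetical), a dedicated agent service account can be granted read-only access to a single bucket rather than inheriting a broad project-level role.

```python
from google.cloud import storage

PROJECT = "example-project"        # hypothetical project ID
BUCKET = "example-agent-corpus"    # hypothetical bucket the agent actually needs
AGENT_SA = f"serviceAccount:agent-runner@{PROJECT}.iam.gserviceaccount.com"

client = storage.Client(project=PROJECT)
bucket = client.bucket(BUCKET)

# Grant the dedicated agent account read-only access on this one bucket,
# instead of leaving a broad project-wide role on a shared default account.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.version = 3
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {AGENT_SA}}
)
bucket.set_iam_policy(policy)
print(f"granted roles/storage.objectViewer on {BUCKET} to {AGENT_SA}")
```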
The report also points to a growing reliance on specialized security tools to monitor AI environments, including Palo Alto Networks' own Prisma AIRS and Cortex AI-SPM platforms, which aim to detect misconfigurations and identity risks across cloud systems.
More broadly, the findings reflect a deeper architectural challenge facing enterprises: security vulnerabilities are increasingly emerging not from isolated software bugs, but from the way complex systems interact. Even when individual components operate as designed, their combined behavior can create unintended exposure pathways.
As organizations accelerate their adoption of AI-driven automation, analysts warn that traditional security models may prove insufficient. Managing permissions, isolating workloads, and redefining trust boundaries for autonomous systems are likely to become central priorities in the next phase of enterprise cybersecurity.



