Let’s walk through how Treasure Data’s secure-by-design methodology converts potential impediments into manageable, secure functionalities.
Top challenge 1: Controlling AI agent processing of customer data
The challenge: The risk associated with leveraging AI agents to process customer data is twofold:
- Protecting customer data from exfiltration by a compromised agent
- Preventing the agent from being manipulated into processing unauthorized or out-of-scope data

Risk management strategy: These risks are mitigated through layered controls:
- Strict input controls: All prompts and parameters must be validated and sanitized before reaching the agent to filter malicious instructions.
- Explainability: On a platform rich in personal data, such as a CDP, transparency in data processing is critical. AI agents must expose the step-by-step chain of thought behind their reasoning, and any change action must be confirmed by a human in the workflow design.
- Principle of least privilege: The agent must be granted credentials that only permit access to specific, pre-approved segments of data, providing a final layer of enforcement at the data source.
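The controls above can be sketched in code. This is a minimal, hypothetical illustration of strict input controls and least-privilege scoping; the injection patterns, segment names, and function names are assumptions for the example, not Treasure Data's actual rules or APIs.

```python
import re

# Hypothetical prompt-injection patterns; a real deployment would use a
# maintained ruleset or classifier, not a short regex list.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]

# Least privilege: the only segments this agent's credentials may touch.
APPROVED_SEGMENTS = {"us_loyalty_members", "email_optin_q3"}

def validate_prompt(prompt: str) -> str:
    """Strict input controls: reject prompts matching known injection patterns."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt rejected by input controls")
    return prompt.strip()

def check_segment_access(requested: set[str]) -> set[str]:
    """Enforce the pre-approved data scope at the data source."""
    out_of_scope = requested - APPROVED_SEGMENTS
    if out_of_scope:
        raise PermissionError(f"Out-of-scope segments: {sorted(out_of_scope)}")
    return requested

print(validate_prompt("Summarize churn risk for loyalty members"))
print(check_segment_access({"us_loyalty_members"}))
```

Note that the scope check runs at the data source, independently of prompt validation, so a prompt that slips past the input filter still cannot reach unapproved data.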
Top challenge 2: Maintaining granular, context-aware permissions
The challenge: Within a modern CDP, numerous users with diverse roles interact with various AI agents. Static, one-size-fits-all permission models represent a significant liability: they create a high risk of privilege escalation, unauthorized data exposure, and misuse of agents beyond their intended scope. Such vulnerabilities can lead to severe privacy violations and security breaches involving sensitive customer data.

Risk management strategy (Defense-in-Depth): This challenge is addressed by implementing a Defense-in-Depth technology stack. The core tenets of Zero Trust are crucial for managing granular access:
- Continuous monitoring and validation: Access to customer data via AI agents cannot be treated as a static, one-time permission grant; it requires constant authentication and authorization, verifying the user's identity, the agent's context, and the specific data being accessed in real time.
- Least-privilege access: Users are granted the minimum necessary access to data through specific agents. Concurrently, the agents themselves are given the required minimum permissions based on the user's permissions and the particular task. This model ensures a user can only access data relevant to their task and that the agent acts strictly within that defined scope.
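A per-request authorization check of this kind might look like the following sketch. The `AccessRequest` structure, role names, and policy table are hypothetical assumptions for illustration; the point is that every call re-verifies the user, the agent, the dataset, and the action, with no implicit trust carried over from earlier requests.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user_id: str
    user_role: str
    agent_id: str
    dataset: str
    action: str  # e.g. "read" or "write"

# Hypothetical policy: role -> (agent, dataset, action) tuples it may exercise.
POLICY = {
    "analyst": {("segment_agent", "customer_events", "read")},
    "prompt_engineer": {
        ("segment_agent", "customer_events", "read"),
        ("segment_agent", "customer_events", "write"),
    },
}

def authorize(req: AccessRequest) -> bool:
    """Zero Trust: every request is checked against the live policy; deny by default."""
    allowed = POLICY.get(req.user_role, set())
    return (req.agent_id, req.dataset, req.action) in allowed

req = AccessRequest("u1", "analyst", "segment_agent", "customer_events", "read")
print(authorize(req))  # True
```

Because `authorize` is evaluated on each request rather than at session start, revoking a role in `POLICY` takes effect immediately, which is what distinguishes continuous validation from a one-time grant.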
How Treasure Data delivers Zero Trust security for AI Agent Foundry
We support customers with Zero Trust expectations for their AI initiatives by embedding security principles directly into our platform architecture. We deliver a rigorous Zero Trust security posture for the AI Agent Foundry by adhering to the core pillar of "Never Trust, Always Verify." This is achieved through per-request authentication and permission checks for all human and agent actions, which defends against both internal and external misuse by eliminating any form of implicit trust. This foundation is supported by a comprehensive set of compensating controls that emphasize least-privilege policies and deep visibility through auditability for all AI agent workflows.

1. Policy-based permissions (PBP): Enforcing least-privilege access
At the core of our Zero Trust strategy for AI Agent Foundry is our Policy-Based Permissions (PBP) model, designed for fine-grained control over access and actions. This system enforces the principle of least-privilege access by applying permissions at the most granular level possible: down to individual projects, agents, and datasets. Permissions in the AI Agent Foundry are assigned through policy configurations to different user roles, such as prompt engineers, data product managers, or general end users. These permissions include:
- Agent and knowledge base management: The ability to create, edit, and delete custom agents, knowledge bases, and user prompts.
- Integration management: Control over internal integrations, like connecting to Parent Segments for audience generation, as well as external connections, such as Webhook or Slack integrations.
- Generic chat access: Permission for users to interact with chat features without having any administrative rights to create or modify the underlying agents.
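A policy configuration along these lines could be sketched as a deny-by-default capability map. The role names mirror the examples above, but the capability strings and the `PBP_POLICIES` structure are illustrative assumptions, not the product's actual policy schema.

```python
# Hypothetical PBP configuration: each role gets only the capabilities it
# needs. Anything not explicitly granted is denied.
PBP_POLICIES = {
    "prompt_engineer": {
        "agent.create", "agent.edit", "agent.delete",
        "knowledge_base.manage", "chat.use",
    },
    "data_product_manager": {"integration.manage", "chat.use"},
    "end_user": {"chat.use"},  # chat access with no administrative rights
}

def has_permission(role: str, capability: str) -> bool:
    """Least privilege: deny by default unless the policy grants the capability."""
    return capability in PBP_POLICIES.get(role, set())

print(has_permission("end_user", "chat.use"))       # True
print(has_permission("end_user", "agent.create"))   # False
```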
2. Premium audit logs: Deep visibility and traceability into AI Agent Foundry actions
We provide deep visibility and accountability for all activities in the AI Agent Foundry through premium audit logs, which are designed for high-integrity security monitoring.
- Comprehensive event capture: Actions performed within AI Agent Foundry are captured in detailed, immutable audit logs.
- Seamless SIEM integration: Customers can retain exported audit logs for as long as they need and integrate them directly with their own Security Information and Event Management (SIEM) platforms.
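One common way to make exported logs tamper-evident is to hash-chain each record to its predecessor, so any alteration breaks the chain and is detectable downstream in the SIEM. The sketch below is a generic illustration of that technique with assumed field names; it does not describe the internal format of Treasure Data's premium audit logs.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(prev_hash: str, actor: str, agent: str, action: str) -> dict:
    """Build an audit record chained to the previous record's hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "agent": agent,
        "action": action,
        "prev_hash": prev_hash,
    }
    # Hash the canonicalized record; changing any field invalidates the chain.
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

# Chain two events: the second references the first event's hash.
e1 = audit_event("0" * 64, "u1", "segment_agent", "agent.invoke")
e2 = audit_event(e1["hash"], "u1", "segment_agent", "dataset.read")
print(json.dumps(e2))  # newline-delimited JSON ships easily to a SIEM
```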