Blog | Treasure Data

Innovation Without Compromise: How We Use AI While Keeping Security Non-Negotiable

Written by Nahi Nakamura | Mar 9, 2026 5:55:39 PM

The conversation around AI-assisted development often presents a false choice: move fast with AI, or maintain rigorous security practices. We reject that framing.

At Treasure Data, we've embraced AI tools to accelerate development. Engineers use AI to write code, review code, and iterate faster than ever before. But our security posture hasn't changed. Here's how we think about it.

Human judgment and approval are always present

When AI writes code, humans remain accountable. This isn't about distrust of AI — it's about maintaining clear responsibility. When something goes wrong, there must be a human who can investigate, explain, and ensure it doesn't happen again.

In practice, this means:

  • Every production release has human approval. Direct review at the point of change, by engineers who understand what they're approving.
  • "AI did it" is never an acceptable answer. Someone reviewed it, someone approved it, someone is accountable.

The speed gain from AI comes from faster iteration, not from removing human accountability.

AI review reduces burden. It doesn't remove accountability.

We use AI to review code before human reviewers see it. This catches routine issues earlier — style inconsistencies, common bugs, missing edge cases. Human reviewers can then focus on architecture, security implications, and business logic.

This changes the nature of human review, not its necessity. The human reviewer remains accountable for what ships. AI is additive — it reduces burden and catches issues earlier, but humans make the final call.
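The gating logic described above can be sketched in a few lines. This is a hypothetical illustration, not Treasure Data's actual tooling; the names (`ReviewResult`, `can_merge`, the approver field) are assumptions made for the example. The point it demonstrates is that AI findings inform the review, but only a named human approval lets code ship.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    # Issues the AI pre-review surfaced: style nits, common bugs, missing edge cases
    ai_findings: list = field(default_factory=list)
    human_approved: bool = False
    approver: str = ""  # the accountable engineer, by name

def can_merge(review: ReviewResult) -> bool:
    # AI review reduces burden, but it never substitutes for the human gate:
    # merging requires an explicit approval from a named person, regardless
    # of whether the AI found anything.
    return review.human_approved and bool(review.approver)

# AI flagged (and the author fixed) an issue, but no human has signed off yet: blocked.
pending = ReviewResult(ai_findings=["missing null check in parser"])
assert can_merge(pending) is False

# A named engineer reviewed and approved: allowed.
approved = ReviewResult(human_approved=True, approver="alice")
assert can_merge(approved) is True
```

Note that `ai_findings` never appears in `can_merge`: the AI's output shapes what the human looks at, not whether a human looks at all.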

Security slowdowns are usually culture problems, not design problems

When security slows teams down, the cause is rarely technical. It's usually decision avoidance.

Security decisions require accountability. Explaining a choice to customers, legal, and privacy teams is complex. When people lack confidence to make those calls, they fall back on process — following precedent, adding "just in case" protections, waiting for someone else to decide.

The result? Over-engineering that creates its own problems: encryption deployed without key management, redundant controls that confuse auditors, delays that compound.

The solution isn't faster processes. It's building a culture where people escalate when they're unsure, and where those who understand security make decisions directly. For important matters, senior leadership participates. We will not compromise security for speed — but we also won't accept slowdowns that come from decision avoidance.

Customer data protection: System controls, not just policy

As a customer data platform handling data for hundreds of global brands, we maintain strict boundaries around data access.

For the most sensitive customer data:

  • Access is blocked at the system level — not just prohibited by policy
  • Only designated Treasure Data engineering personnel can access it, through monitored channels
  • Data cannot leave its designated region
  • We monitor for leaks, not just policy violations

For operational data (logs, configurations, metadata):

  • Access is role-based and monitored
  • Local persistence is prohibited
  • Temporary access for support purposes is permitted under controlled conditions

This layered approach means our most sensitive data is protected by architecture, not just training. Policy and training matter — but they're the second line of defense, not the first.
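The layered model above amounts to a deny-by-default access check with region pinning. The following is a minimal sketch under stated assumptions: the data classes, role names, and the `DESIGNATED_ENGINEERS` set are illustrative stand-ins, not Treasure Data's real configuration.

```python
SENSITIVE = "sensitive"      # customer data: system-blocked, designated personnel only
OPERATIONAL = "operational"  # logs, configurations, metadata: role-based

# Illustrative allowlist of designated engineering personnel
DESIGNATED_ENGINEERS = {"eng-oncall-1"}

def allow_access(user: str, role: str, data_class: str,
                 user_region: str, data_region: str) -> bool:
    if user_region != data_region:
        return False  # data cannot leave its designated region
    if data_class == SENSITIVE:
        # Enforced by the system, not by policy alone
        return user in DESIGNATED_ENGINEERS
    if data_class == OPERATIONAL:
        return role in {"support", "engineer"}  # role-based, monitored
    return False  # default deny for anything unclassified

# Designated engineer, same region: allowed through monitored channels
assert allow_access("eng-oncall-1", "engineer", SENSITIVE, "us", "us") is True
# Any other engineer hitting sensitive data: blocked by the system
assert allow_access("someone", "engineer", SENSITIVE, "us", "us") is False
# Cross-region access is denied regardless of role or data class
assert allow_access("someone", "support", OPERATIONAL, "us", "eu") is False
```

The final `return False` is the architectural point: anything the system does not explicitly permit is denied, so protection does not depend on someone remembering a policy.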

AI tools under the same rules

When we adopted AI tools for development and support, we applied the same principles:

  • AI runs on our infrastructure. We use managed AI services, including Amazon Bedrock, within our AWS cloud environment rather than external endpoints. Training on our data is disabled.
  • Access controls don't change. AI tools inherit the same access restrictions as any other tool. If an engineer can't access certain data directly, they can't access it through AI either.
  • Sensitive customer data remains protected at the system level. AI tools operate under the same infrastructure controls as any other access method.

AI doesn't get special exceptions. It operates within our existing security architecture.
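One way to picture "AI inherits the same restrictions" is that the AI path calls the same data-access check as every other tool, rather than bypassing it. A minimal sketch, with `fetch_data` and `query_ai` as hypothetical stand-ins (the real managed-model call is elided):

```python
def fetch_data(user_permissions: set, dataset: str) -> str:
    # The single access check every tool goes through
    if dataset not in user_permissions:
        raise PermissionError(f"{dataset}: access denied")
    return f"<contents of {dataset}>"

def query_ai(user_permissions: set, dataset: str, prompt: str) -> str:
    # The AI tool reuses the same check with the caller's own permissions;
    # it is not a side door around them.
    context = fetch_data(user_permissions, dataset)
    return f"model answer for {prompt!r} using {context}"  # placeholder for a managed-model call

perms = {"ops-logs"}  # this engineer can see operational logs, nothing else

# Permitted data flows through the AI tool as it would through any tool
assert "ops-logs" in query_ai(perms, "ops-logs", "summarize errors")

# Data the engineer cannot access directly stays inaccessible through AI
try:
    query_ai(perms, "customer-pii", "summarize")
    raise AssertionError("should have been denied")
except PermissionError:
    pass
```

Because the check lives in one place and the AI path calls into it, adding AI does not add a new trust boundary to audit.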

Moving fast and staying safe

AI doesn't make security easier or harder. It makes everything faster — including the feedback loops that catch problems.

The question isn't whether to use AI. It's whether your security practices are robust enough to handle accelerated development. If your security depends on slowing down, AI will expose that weakness. If your security is built into the architecture and culture, AI amplifies your ability to ship safe software quickly.

We've chosen to build security into both. AI helps us move faster without compromising what matters.