Skip to Content Treasure Data Logo Treasure Data Logo
  • Product
    • AI Agents
      • AI Overview
      • Marketing Super Agent
      • Agent Hub
    • AI Marketing Cloud
      • Overview
      • Omnichannel Engagement
      • Real-time Personalization
      • Paid Media Targeting & Optimization
      • Creative Automation for Marketing
      • Support, Clienteling & B2B Interactions
    • Customer Data Platform
      • CDP Overview
      • Integrations
      • Hybrid Architecture
      • Trust & Security
  • Solutions
    • Industries
      • Automotive
      • CPG
      • Entertainment & Media
      • Financial Services
      • Healthcare
      • Retail
      • Technology
      • Travel & Hospitality
  • Customers
  • Resources
    • Explore
      • Resource Library
      • Case Studies
      • Blog
      • Pricing
      • Documentation
      • Training
      • Events
      • Webinars
    • Get Started
      • Demo
      • AI Workshop
      • Fast Proof of Concept
      • RFP Template
      • Trade-Up Program
      • Value Calculator
  • Company
    • Company
      • About Us
      • Careers
      • Partners
      • News
      • Contact Us
      • Terms
Login
Get a demo
  • Menu Item 1
    • Sub-menu Item 1
      • Another Item
    • Sub-menu Item 2
  • Menu Item 2
    • Yet Another Item
  • Menu Item 3
  • Menu Item 4
Blog
    • Customer Data Strategy
    • CDP
    • Partners
    • AI & Machine Learning
    • Treasure Data CDP
    • CDP Use Cases
    • Data Privacy & Security
    • CDP | Customer Data Strategy
    • CDP|Customer Data Strategy
    • Company News
    • Marketing
    • AI & Machine Learning | Data Privacy & Security
    • AI & Machine Learning | CDP | Data Strategy
    • AI & Machine Learning | Marketing
    • AI & Machine Learning | Privacy & Security
    • AI and Machine Learning | CDP
    • CDP Use Cases|Marketing
    • CDP | CDP Use Cases
    • CDP | CDP Use Cases | Marketing
    • CDP | Customer Data Strategy | Treasure Data CDP
    • CDP | Marketing
    • CDP | Partners
    • CDP|CDP Use Cases|Treasure Data CDP
    • Customer Data Strategy | Treasure Data CDP
    • Customer Service
    • Marketing | Treasure Data CDP

Get the latest in your inbox.

February 3, 2026

An Operational Security Blueprint for AI-First Leaders

Anthony Milazzo Anthony Milazzo
  • Data Privacy & Security
AI security blueprint

For the first time in modern enterprise history, AI has surpassed ransomware as the top security concern. In 2025, 29% of security leaders cited AI, LLMs, and privacy issues as their number one threat—displacing the specter that has haunted CISOs for over a decade.

This isn't just a shift in priorities. It is the "check engine" light for the entire security industry.

For twenty years, we built security frameworks for deterministic code. We assumed software would execute predictable paths, that access controls could be scoped to human users, and that input validation could distinguish malicious payloads from legitimate requests.

Those assumptions are now obsolete.

In 2026, the margin for error is vanishing. The adversaries we face are no longer just human hackers; they are AI-augmented operators capable of discovering vulnerabilities and executing attacks at machine speed. The companies that survive this transition won't be the ones with the biggest security budgets or the fanciest vendor dashboards.

They will be the ones who realized you can no longer buy AI security the traditional way. You have to build it, and your partners have to embed it in their own operations.

The deterministic model is dead

The fundamental error most enterprises are making right now is treating AI security as a product category to procure rather than a transformation of how security teams operate.

The industry responded to AI anxiety the way it always responds: with SKUs. Vendors hastily rebranded existing security products with "AI" marketing. CISOs allocated budgets for "AI Firewalls" without the level of understanding and rigor required.

Now, enterprises are less secure today than they were twelve months ago. Here is why the old model is failing:

1. The threat model changed

Traditional security relies on deterministic logic: If Input A, then Output B. You can write a firewall rule for that.

AI systems, however, are probabilistic. An AI agent might process a request safely 99 times and then, on the 100th time, generate dangerous instructions because a user phrased a malicious prompt in a way that sounded "helpful" to the model. You cannot write a firewall rule for "don't be manipulated by persuasive language."

2. The velocity gap is lethal

In 2025, less than 5% of enterprise applications integrated autonomous AI agents. By the end of 2026, Gartner projects that number will hit 40%.

That is an eight-fold increase in attack surface in twelve months. Engineering teams, powered by AI coding assistants, are shipping features faster than human security teams can review them. If your security team is fixing bugs manually instead of designing secure systems, you are the bottleneck.

And when security becomes a bottleneck, engineering routes around it.

3. Shadow AI has escaped the lab

The problem isn't just developers anymore. Employees at an estimated 69% of companies are bringing in shadow AI tools with a credit card and a click.

The new Shadow AI isn't a rogue server under a desk; it's a browser extension that reads every email to "summarize" it, or a free SaaS tool that transcribes confidential Zoom meetings. Teams are plugging corporate data into unvetted AI platforms to hit their KPIs, often granting broad OAuth permissions without realizing they are handing over the keys to the kingdom.

While engineering outpaces security reviews, the rest of the organization is leaking data through the side door.

Long live the deterministic model

This creates a paradox: The deterministic model is dead for behavior, but it is more critical than ever for infrastructure.

AI applications are, at their core, software. They run on code, consume APIs, and execute on infrastructure. If you abandon the basics—secure configuration, identity management, and patching hygiene—you are building a super-intelligence on a foundation of sand. Because AI agents operate with high autonomy and speed, a traditional vulnerability (like a misconfigured S3 bucket or a hardcoded secret) that might have been a "Medium" severity issue in 2023 becomes an automated catastrophe in 2026.

We cannot abandon the discipline of engineering security. We have to accelerate it.

  • Shift left is no longer a buzzword: You cannot wait until a vulnerable AI feature is shipped to test it. Security must partner extensively with the product team to collaborate when the intent of the agent is defined, not just when the code is written.
  • Auto-remediation is non-negotiable: With the volume of code generated by AI, manual triage is impossible. We must rely on auto-remediation tools to prevent and fix obvious flaws instantly.
  • The "basics" are the firebreak: You might not be able to prevent every prompt injection, but if your traditional security controls are solid, you limit what the compromised agent can actually do.

Fighting fire with fire

The organizations that will succeed in 2026 are those where security teams start acting as builders of the secure foundation that makes AI possible.

At Treasure Data, we started this reckoning early. Our organizational AI adoption in engineering and beyond quickly outpaced our ability to hire security analysts. We realized we couldn't scale by hiring more people to stare at logs. We had to change the math.

Here is the operational blueprint for surviving the transition:

Principle 1: Ruthless experimentation

You cannot secure what you do not understand, and you cannot understand AI by reading about it. You have to build with it.

Long before AI became a corporate mandate, our security team was already in the trenches. In early 2024, we were experimenting with LangChain. Later on, we were pioneers building agents inside the early alpha-stage precursor to what is now Treasure Data’s AI Agent Foundry.

We built early agents for the security team to leverage—not just to solve problems, but to understand the anatomy of the systems we would soon be asked to protect. We learned the failure modes of our product by breaking it ourselves, and fixing the findings before shipping the GA release to customers.

Principle 2: Security must be enablers, not gatekeepers

The fastest way to create a security breach in 2026 is to say "No."

If security blocks AI innovation, the business will simply deploy it without you. At Treasure Data, the IT & Security teams became the primary enablers of AI. We:

  • Ran the internal AI bootcamps.
  • Modeled the blueprint for safe agents
  • Brought in Glean to give everyone safe AI access to corporate data while automatically maintaining source system permissions
  • Published the "How-To" guides

By positioning security as the team that says, "Here is how to do it safely," rather than "You can't do that," we brought Shadow AI into the light.

Principle 3: You must use AI to secure AI

Scaling your security program isn't just about procuring new tools; it’s about reclaiming bandwidth. The first step is using AI to eliminate the operational overhead that prevents your team from keeping pace.

We didn't automate advanced threat hunting first. We automated the drudgery. We built plugins for Claude Code that utilize Glean MCPs to manage the overhead of daily operations—Jira ticket hygiene, meeting preparation, intelligence gathering, and security research. We deployed GitHub Copilot Autofix to bring AI-powered remediation into the pull request workflow, instantly suggesting fixes for vulnerabilities so developers can solve security issues without waiting for a human review cycle.

All this doesn't mean abandoning traditional automation; it means leaning into it harder than ever. We use Tines to fuse deterministic workflows with AI intelligence. The result is a hybrid workflow that enables human experts to make decisions to keep pace in an AI future.

By using AI to handle the 40% of the day security engineers usually spend on project management and basic remediation, we freed up our humans to do more strategic work.

The survival ultimatum

The real forcing function is the weaponization of AI by state-sponsored adversaries. As recent intelligence from Anthropic and others confirms, threat actors are no longer just targeting AI; they are using AI to accelerate espionage, automate vulnerability discovery, and conduct sabotage at machine speed.

We are entering an era of AI-augmented conflict. Adversaries are using AI to extract sensitive data from within victims' file systems, generate sophisticated spear-phishing campaigns, and write malicious code to carry out their attacks.

In 2026, the question won't be if AI security threats will escalate, but whether your organization can survive an attack executed at speeds surpassing human response capabilities. Organizations that have not implemented operational AI security infrastructure will face a critical decision: either disable their systems or suffer a breach that goes undetected until the damage is done.

If you are a leader evaluating partners or hiring teams to navigate this, ignore the marketing. Apply the Operational Experience Test:

  1. Have they secured AI in production? Not in a lab. In the wild, with real customer data.
  2. Do they use the tools themselves? This applies to any vendor selling AI products. If they aren't augmenting their own internal operations on AI, they don't understand the risks required to secure the products they are selling you.
  3. Do they talk about their failures? If they claim their systems are perfect, run. You want a partner who knows exactly how stubborn hallucinations are because they’ve spent a year engineering the retrieval context to prevent them.

Theory doesn't fix hallucinations. Experience does.

The era of buying security is over. The era of building it has begun.

 

Topics Covered

  • Data Privacy & Security
Anthony Milazzo
Anthony Milazzo

Anthony Milazzo is the director, security architecture at Treasure Data.

Recent Posts

A path forward indicating market shifts
AI & Machine Learning 8 min read
Market Shifts: From CDPs to Execution, From SaaS to AI-Native Platforms
AI & Machine Learning 8 min read
Sensible Pricing for the AI-Native World
Treasure Data Logo Symbol

+1 866.899.5386 (US)
+1 650.772.4500 (Non-US)

  • Product
    • AI Agents
      • AI Overview
      • Marketing Super Agent
      • Agent Hub
      • Responsible AI
      • UX Research
    • AI Marketing Cloud
      • Overview
      • Omnichannel Engagement
      • Real-time Personalization
      • Creative Automation for Marketing
      • Paid Media Targeting & Optimization
      • Support, Clienteling & B2B Interactions
    • Customer Data Platform
      • CDP Overview
      • Integrations
      • Hybrid Architecture
      • Trust & Security
  • Solutions
    • Industries
      • Automotive
      • CPG
      • Entertainment & Media
      • Financial Services
      • Healthcare
      • Retail
      • Technology
      • Travel & Hospitality
  • Resources
    • Explore
      • Resource Library
      • Case Studies
      • Blog
      • Pricing
      • Documentation
      • Training
      • Events
      • Webinars
    • Get Started
      • Demo
      • AI Workshop
      • Fast Proof of Concept
      • RFP Template
      • Trade-Up Program
      • Value Calculator
  • Company
    • Company
      • About Us
      • Customers
      • Partners
      • Careers
      • News
      • Contact Us
      • Terms
  • Get a demo
  • Terms & Conditions
  • Privacy Statement
  • Cookie Policy
  • Privacy Hub
  • Trademarks
  • Modern Slavery Statement
  • Your Privacy Choices
©2026 Treasure Data, Inc. (or its affiliates) All rights reserved.