Skip to main content

Agentic AI Security: New Risks and Controls in the Databricks AI Security Framework (DASF v3.0)

35 new agentic AI risks and 6 mitigation controls for agents that access data, call tools, and execute actions

Data Intelligence Platforms

Summary

  • The Databricks AI Security Framework (DASF) now covers Agentic AI as its 13th system component, adding 35 new technical security risks and 6 new mitigation controls to help organizations deploy autonomous agents with confidence.
  • This extension addresses the unique risks of agent memory, planning, and tool use, including threats introduced by the Model Context Protocol (MCP), the emerging standard for connecting agents to enterprise tools.
  • The DASF Agentic AI Extension whitepaper and updated compendium are available now. Download them to assess your agent architectures, map your tool ecosystems, and implement defense-in-depth controls purpose-built for autonomy.

We are excited to announce the release of the Databricks AI Security Framework (DASF) Agentic AI Extension whitepaper! Databricks customers are already deploying AI agents that query databases, call external APIs, execute code, and coordinate with other agents. We constantly hear the teams responsible for those deployments are asking hard questions: what happens when the AI can do things, not just say things? That is why we have extended DASF.

With this update, we introduce new guidance for securing autonomous AI agents:

  • 35 new agentic AI security risks covering agent reasoning, memory, and tool usage
  • 6 new mitigation controls including least privilege, sandboxing, and human oversight
  • Security guidance for Model Context Protocol (MCP) tool servers and clients
  • Coverage for multi-agent system risks and agent communication threats

Together these additions help organizations deploy AI agents safely while maintaining governance, observability, and defense-in-depth security controls.

This brings the full framework to 97 risks and 73 controls. We have updated the DASF compendium (Google sheet, Excel) to include these new risks and controls, mapping them to industry standards to facilitate immediate operationalization. These additions are cataloged as DASF v3.0 under the "DASF Revision" column.

Databricks AI Security Framework
Fig 1: The 13 canonical components of an end-to-end AI system, with Agentic AI introduced as the 13th component.

Security risks when AI agents can take actions

Traditional AI systems like RAG operate mostly in a read-only mode. But AI agents can take actions such as querying databases, calling APIs, executing code, and interacting with external tools.

Agents work differently. When a user engages an agent, the model kicks off a loop: it breaks the request into sub-tasks, picks a tool (say, "Query Sales Database"), executes it, evaluates the output, and decides whether to call a different tool next. This continues until the task is done. The agent is making real-time decisions about which data to access and which tools to invoke — decisions that used to be made by humans or hardcoded into application logic.

That creates a new class of risk we call Discovery and Traversal. An agent designed to find solutions will traverse data paths and tool interfaces that were never intended for the requesting user. It's not exploiting a bug. It's doing exactly what it was built to do. But without proper controls, the user effectively inherits the agent's permissions rather than their own.

The Lethal Trifecta. Recent industry research, including Meta’s “Agents Rule of Two” and similar models like Simon Willison’s “Lethal Trifecta”, highlights the conditions under which this gets dangerous. The risk profile spikes when three conditions are present simultaneously:

  1. Access to sensitive systems or private data: The agent can retrieve private or restricted data.
  2. Process untrustworthy inputs: The agent processes data from outside the trust boundary — user prompts, external websites, incoming emails.
  3. Change state or communicate externally: The agent can modify state through tools or MCP connections — sending emails, executing SQL, modifying code.

With all three in place, an indirect prompt injection embedded in untrusted data can hijack the agent's full capability set, turning it into a "confused deputy" that performs authorized actions with malicious intent. Remove any single-leg by scoping permissions down, adding a human checkpoint, validating intent before tool selection, and breaking the attack chain.

How the extension is organized

The 35 new risks and 6 controls are organized around three sub-components that map to how agents actually work:

13A: The Agent Core (brain and memory)

These risks target the agent's reasoning loop. Memory Poisoning (Risk 13.1) introduces false context that alters current or future decisions. Intent Breaking & Goal Manipulation (Risk 13.6) coerces the agent into deviating from its objective. And because agents operate in multi-turn loops, Cascading Hallucination Attacks (Risk 13.5) can compound a minor error across iterations into a destructive action.

13B: MCP Server risks (the tool interface)

Agents interact with external systems through tools, increasingly standardized via the Model Context Protocol (MCP). On the server side, attackers may deploy Tool Poisoning (Risk 13.18) — injecting malicious behavior into tool definitions — or exploit Prompt Injection (Risk 13.16) within tool descriptions to bypass security controls.

13C: MCP Client risks (the connection layer)

On the client side, if the agent connects to a Malicious Server (Risk 13.26) or fails to validate server responses, it risks Client-Side Code Execution (Risk 13.32) or Data Leakage (Risk 13.30). As MCP adoption grows, securing the client-server boundary matters as much as securing the agent's reasoning.

Inter-agent dynamics

Agents will increasingly communicate with other agents. That creates risks of Agent Communication Poisoning (Risk 13.12) and Rogue Agents in Multi-Agent Systems (Risk 13.13) — agents that operate outside monitoring boundaries, a problem that compounds with scale.

A 5X LEADER

Gartner®: Databricks Cloud Database Leader

Controls for securing AI agents and autonomous systems

The DASF has always been about defense-in-depth. But when an AI system can take action, read-only access controls aren't enough. The new controls address this directly:

  • Least privilege for tools (DASF 5, DASF 57, DASF 64): Agents need granular permissions scoped to their immediate task, limiting the blast radius the same way RBAC and ABAC limit a human's. Just because an agent can call the HR Metrics Tool doesn't mean it should when answering a sales query.
  • Human-in-the-loop oversight (DASF 66): For high-stakes actions, require human verification before tool execution. The control design accounts for approval fatigue — if you overwhelm the human reviewer, you've created a new vulnerability, not solved one.
  • Sandboxing and isolation (DASF 34, DASF 62): Agent-generated code runs in ephemeral, isolated environments. If an agent decides to write and execute a script, that execution shouldn't have access to the broader system and the outbound connections to unknown destinations.
  • AI Gateway and Guardrails (DASF 54): Agents need protections against scenarios where an agent is being manipulated into surfacing data it shouldn't. Agents' interactions via gateway and guardrails such as monitoring, safety filtering, and PII detection needs to be applied. These guardrails can be applied to either the input or output of an agent (or both). Also it's equally important to monitor what's actually being returned by the agent.
  • Observability of thought (DASF 65): Standard logging tells you what happened. Agentic tracing captures why — the planning steps, the tool-selection reasoning, the chain of thought that led to an action. Without this, you can't audit an agent's decisions or detect when its reasoning has been compromised.

For Databricks customers, the compendium maps these controls to platform capabilities, including Unity Catalog governance for agent data access, Agent Bricks Framework, AI Gateway guardrails, and Vector Search security settings.

Built with the community

This extension reflects input from reviewers and contributors across Databricks and the security community, including teams at Atlassian, Experian, and ComplyLeft. We also drew heavily on work from MITRE ATLAS, OWASP, NIST, and the Cloud Security Alliance — the updated compendium maps all 97 risks and 73 controls to these industry standards.

Get started

Download the DASF Agentic AI Extension whitepaper for the full treatment of all 35 new agentic AI risks and 6 new controls, and grab the updated compendium (Google Sheet, Excel) which now maps agentic risks and controls alongside the original DASF. Use these resources to:

  1. Assess your current agent architectures against the agent AI risk model.
  2. Map your tool ecosystems — including MCP servers and clients — to the identified threat vectors.
  3. Implement the recommended controls to ensure your agents operate within safe, governed boundaries.

For deeper context, read the full DASF whitepaper and explore the Agent Bricks Framework documentation to see how these controls work on the platform.

Reach out to your Databricks account team or email us at [email protected] with feedback — this framework belongs to the community as much as it does to us.

Never miss a Databricks post

Subscribe to our blog and get the latest posts delivered to your inbox