Indicators of Leakage: An End to Policy-Based DLP

For years, Data Loss Prevention (DLP) tools and platforms have relied on policies as their backbone. Security teams would define dozens, sometimes even hundreds, of rules meant to cover every possible way sensitive data might leave the organization. A policy to block uploads over 50MB. A policy to alert when a file contains credit card numbers. A policy to flag emails to external organizations.

The problem is that policies treat all activity in the same rigid way. They focus on what is happening, but not why it’s happening. A developer downloading gigabytes of source code may be doing their job, or preparing to walk away with valuable IP on their last day of work. A late-night upload might be legitimate business, or the beginning of a breach.

You can cover these use cases with policies, but you’ll never be able to create enough to achieve full coverage, and you’ll flood your system with false positives in the process. As your organization continues to grow, the destinations for data exfiltration grow exponentially in both number and complexity.

Policies can’t tell the difference because they lack context and understanding of intent.

We take a different approach. Instead of drowning in policies, Orion uses AI to look for Indicators of Leakage (IOLs). These subtle, context-aware signals are enriched with user details from your Identity Management, data-at-rest inventories, and patterns of typical access. Focusing on intent, not just activity, they reveal when data movement may be inappropriate, giving security teams a far clearer picture of real risk without manually writing hundreds of policies.

In this blog, we will define the term and explain how this new methodology can help you achieve the coverage that modern DLP requires.

What Are Indicators of Leakage (IOLs)?

Indicators of Leakage (IOLs) are AI-detected signals that go beyond surface-level activity to capture the context and intent behind data movement. Instead of asking what is happening, like “a file was uploaded” or “a database was queried”, IOLs help answer the more critical question: why is this happening, and does it make sense in context?

Unlike traditional DLP alerts triggered by predefined policies, IOLs are uncovered by Orion’s AI agent through continuous observation of how data moves across your environment. In simple terms, traditional policies often treat large downloads or file transfers the same. IOLs don’t.

They analyze who is taking the action, what role they play, how they usually interact with data, and whether the behavior aligns with expected patterns.

The agent builds a baseline of normal behavior by analyzing:

  • Data movement logs across applications, repositories, and endpoints
  • User identity details from IDP and HRMS systems
  • Contextual patterns such as team norms, geography, and time of activity

From this foundation, the AI surfaces deviations that matter – the subtle signals that static policies could never capture. Security teams don’t need to define IOLs manually; the system learns them dynamically. Teams can also enrich the IOL database by hand, but the heavy lifting is done by the AI.
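To make the baselining idea concrete, here is a minimal sketch of a peer-group outlier check – not Orion’s actual implementation. The event schema, role grouping, and z-score threshold are all hypothetical, chosen only to illustrate how “normal for the role” can be learned from activity data rather than written as a policy:

```python
from statistics import mean, stdev

# Hypothetical daily data-movement events: (user, role, megabytes_moved)
events = [
    ("alice", "finance", 120), ("alice", "finance", 130), ("alice", "finance", 125),
    ("bob",   "finance", 110), ("bob",   "finance", 115), ("bob",   "finance", 118),
    ("carol", "finance", 980),  # far above peers in the same role
]

def role_baseline(events, role):
    """Baseline of normal volume for a role: mean and std dev of peer activity."""
    vols = [mb for _, r, mb in events if r == role]
    return mean(vols), stdev(vols)

def deviations(events, role, z_threshold=2.0):
    """Surface users whose volume deviates sharply from the role baseline."""
    mu, sigma = role_baseline(events, role)
    flagged = {}
    for user, r, mb in events:
        if r != role:
            continue
        z = (mb - mu) / sigma
        if z > z_threshold:
            flagged[user] = round(z, 2)
    return flagged

print(deviations(events, "finance"))  # carol is flagged; her peers are not
```

A real system would of course baseline many more dimensions (destinations, timing, geography) and update continuously, but the core move is the same: compare each action against what is normal for that user and role, not against a fixed rule.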

IOLs shift the focus from activity-based monitoring to intent-aware detection. By surfacing the subtle deviations that policies miss, they allow security teams to see the difference between normal business operations and actions that may indicate exfiltration – be it purposeful or accidental.

IOL Examples

So what does an Indicator of Leakage actually look like in practice? Because IOLs are automatically detected by Orion’s AI agent, they don’t need to be hard-coded into policies. They emerge naturally when the system sees behavior that deviates from established norms.

Some examples include:

  • Scope creep in data access: A developer who normally works in one repository suddenly starts pulling large amounts of data from systems outside their typical scope.
  • Unusual destinations for sensitive files: An employee transfers sensitive files to a personal email, an unknown domain, or a storage provider not commonly used by the organization.
  • Outlier behavior within a role: A member of the finance team downloads significantly more customer data than peers in the same role, despite no change in responsibilities.
  • Suspicious timing and location: A late-night upload from a country where the employee doesn’t typically work, combined with an unusual spike in data access, makes the action high-risk.
  • Slow-drip exfiltration: A user moves small volumes of data in ways that seem harmless individually, but when connected together form a clear pattern of data siphoning.
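The slow-drip case is worth a closer look, because it is exactly the pattern per-event policies miss. A simple sketch of the idea is a rolling-window aggregation: each transfer stays under a per-event threshold, but the cumulative total betrays the pattern. The log schema, names, and limits below are hypothetical, for illustration only:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical transfer log: (user, day, megabytes) – each event is small on its own
transfers = [
    ("dave", date(2024, 5, d), 40) for d in range(1, 21)  # 40 MB/day for 20 days
] + [("erin", date(2024, 5, 3), 60)]  # a single, ordinary transfer

def slow_drip(transfers, window_days=30, per_event_limit=100, cumulative_limit=500):
    """Flag users whose transfers each stay under the per-event limit
    but whose total within the window exceeds the cumulative limit."""
    totals = defaultdict(int)
    cutoff = max(day for _, day, _ in transfers) - timedelta(days=window_days)
    for user, day, mb in transfers:
        if day > cutoff and mb < per_event_limit:
            totals[user] += mb
    return {user: total for user, total in totals.items() if total > cumulative_limit}

print(slow_drip(transfers))  # → {'dave': 800}
```

A policy keyed on transfer size alone would let every one of dave’s 40 MB uploads through; only connecting the events over time reveals the siphoning.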

In each of these cases, traditional policies would struggle: either the action breaks no existing rule and is allowed through, or a rule broad enough to catch it generates noisy false positives. IOLs solve this by layering in context – who the user is, what they normally do, where the data should flow – and surfacing only the deviations that truly suggest leakage.

Why Indicators of Leakage Matter

DLP systems often force organizations to staff a dedicated team just to define and maintain hundreds of policies – policies that generate a flood of false positives while failing to block the threats that matter.

Indicators of Leakage (IOLs) break this cycle. Instead of asking whether a specific policy was triggered, IOLs ask whether a user’s behavior makes sense in the context of who they are, what they do, and how the organization normally operates.

This matters for several reasons:

  • Coverage at scale: As organizations grow, data flows multiply across SaaS apps, endpoints, AI applications, and cloud services. It’s impossible to write enough policies to keep up. AI-based IOLs scale naturally with your environment.
  • Reduced false positives: Because IOLs factor in role, team norms, industry, and context, they cut down the flood of irrelevant alerts that frustrate employees, hinder productivity, and overwhelm analysts.
  • Detection of subtle, modern threats: Modern threats, like deepfake remote workers, often evade traditional policies by acting “legitimately.” IOLs can surface the quiet deviations in intent that policies miss.
  • Cost effectiveness: Maintaining a large library of policies requires dedicated staff, constant updates, and significant overhead. By automating the detection of risk signals, IOLs dramatically reduce the cost and effort of managing DLP, freeing teams to focus on higher-value security initiatives.

In short, IOLs give security teams the visibility they’ve always needed but could never achieve through manual policies alone. They replace rules with adaptive, AI-driven detection that understands why data is moving, not just what is moving.

It’s Time to Move On

IOLs represent the next natural step in Data Loss Prevention. Instead of forcing security teams to write endless policies, Orion’s AI surfaces the subtle, intent-driven signals that actually matter. By focusing on context, IOLs deliver the visibility and precision defenders need to stop both accidental leaks and sophisticated exfiltration attempts in real time and at scale, finally closing the blind spot that policies will never be able to cover.

5-minute deployment, 30 minutes to full coverage. But first, a demo.