The newest problem in DLP is far from being the biggest.

AI data exfiltration dominates security headlines, but the biggest DLP risks may still be the oldest ones - because they were never solved.

As cybersecurity professionals working in the DLP space, we tend to gravitate toward the newest threats. We expect ourselves to cover each new threat, and our customers expect the same. Shadow AI, chatbots, copilots, and external AI platforms are among the topics we hear about most often from customers.

But in focusing on the future, we risk missing an uncomfortable truth: some of the most fundamental, elementary threats of DLP were never truly solved.

DOGE employee allegedly steals Social Security data via USB drive

A recent whistleblower complaint alleged that a government employee copied large amounts of Social Security data before leaving their position. The case is still under investigation, but the exfiltration method described is strikingly simple: copying the data onto a USB drive.

No sophisticated AI prompt injection, no complex cloud misconfiguration - just two databases, called “Numident” and the “Master Death File,” that, according to The Washington Post, could include records for “more than 500 million living and dead Americans, including Social Security numbers, places and dates of birth, citizenship, race and ethnicity, and parents’ names.”

For anyone working in data security, this scenario is nothing new. Despite years of investment in data loss prevention tools and insider risk programs, basic exfiltration methods remain highly effective.

Nothing New Under the Sun

This isn’t an isolated case. In fact, many high-profile data incidents over the past few years have involved insiders abusing legitimate access rather than sophisticated external attackers.

In 2025, Coinbase customer support agents with access to internal systems were bribed by attackers to hand over customer data, including personal information and partial Social Security numbers, affecting a portion of the company’s users.

In 2023, two former Tesla employees leaked a dataset containing personal information about more than 75,000 employees to an external media outlet.

There are dozens of other examples out there. The pattern repeats across industries: when someone already has access to sensitive information, exfiltration doesn’t have to be sophisticated to be extremely damaging.

We talk about AI data exfiltration - the news talks about USB drives

None of this should be interpreted as downplaying the risks of AI-driven data exfiltration. If anything, the opposite may be true.

AI may ultimately represent one of the largest data-loss risks organizations have ever faced. The problem is that we currently have a visibility gap.

Security discussions tend to focus on incidents we can actually observe - yet a large portion of AI-related data exposure may still be happening outside the scope of what organizations can currently detect.

Yet, these cases are already beginning to make headlines as well. In a recent case, a threat actor reportedly jailbroke Anthropic’s Claude AI and used it alongside other AI tools to orchestrate attacks against multiple Mexican government agencies, ultimately exfiltrating around 150GB of sensitive data, including taxpayer records and voter information.

This is just one example that surfaced publicly. It’s unlikely to be the only one.

Just as with shadow IT a decade ago, the incidents we talk about most are the incidents we can see. AI-assisted data exposure, on the other hand, often occurs in ways that leave little trace within traditional security controls. The risk, in other words, may be real and growing, but much of it may still be invisible.

Remembering the Basics

The rise of AI absolutely introduces new data loss risks, and organizations should be addressing them.

But the lesson from this case is clear: DLP was never solved properly, and the fundamentals of data protection still matter today as much as they did before AI changed the threat landscape. The question is, what are security teams doing about it?

Traditional DLP approaches often focus on channels - blocking USB drives, monitoring email, scanning web uploads, or restricting access to specific applications. While these controls are important, they also illustrate a deeper challenge: every time a new technology appears, security teams must race to build yet another control for yet another channel.

But the core problem hasn’t changed - data loss rarely happens because a specific technology exists. It happens because someone with access to sensitive data decides - intentionally or accidentally - to move it somewhere it shouldn’t go.

That’s why many organizations are beginning to shift their thinking toward understanding context and intent, rather than focusing exclusively on the mechanism of transfer.
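To make the distinction concrete, here is a loose illustration (not any vendor's actual policy engine; all names and fields are hypothetical) of how a channel-based rule and a context-based rule differ. The channel rule blocks a mechanism outright; the context rule asks whether this data, for this user, makes sense at this destination:

```python
from dataclasses import dataclass

@dataclass
class TransferEvent:
    """Hypothetical record of a data movement observed by a DLP agent."""
    user: str
    classification: str        # e.g. "public", "internal", "restricted"
    destination: str           # e.g. "usb", "email", "cloud_upload"
    typical_destinations: set  # destinations this user normally uses

def channel_rule(event: TransferEvent) -> bool:
    """Channel-centric control: block one transfer mechanism outright."""
    return event.destination == "usb"

def context_rule(event: TransferEvent) -> bool:
    """Context-centric control: flag restricted data moving to a
    destination that is unusual for this user, whatever the channel."""
    return (event.classification == "restricted"
            and event.destination not in event.typical_destinations)
```

Note how the channel rule blocks even harmless USB transfers of public data, yet misses restricted data leaving over email - while the context rule catches the latter because the decision keys on sensitivity and normal behavior rather than the mechanism.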

The threat landscape will continue to evolve. New technologies will appear, new workflows will emerge, and new channels for data movement will inevitably follow, but if the past decade has taught us anything, it’s that solving DLP requires understanding why data is being moved in the first place - and whether that action makes sense in its context.

The tools may change, but the basic problem of people walking away with sensitive data hasn’t disappeared. As security professionals, we should certainly prepare for the next generation of threats, but we shouldn’t assume the previous ones are behind us.
