
Insider threats are often seen as one-off mistakes or impulsive actions by unhappy employees. But in reality, most insider threats unfold quietly over time, hiding in plain sight within normal work habits and legitimate access.
The most recent case taking the security world by storm this week involves a former engineer at Google - demonstrating just how hard insider risk can be to spot, and how expensive it becomes when it goes unnoticed.
This story isn’t just about stolen intellectual property. It’s a real-world example of how insiders take advantage of monitoring gaps, work around traditional data loss prevention (DLP) tools, and use everyday SaaS apps to move sensitive data outside the organization.
The scariest part? Variations of this scenario play out across companies every day, but most of the time, no one realizes it’s happening until it’s far too late.
What Actually Happened?
A former software engineer at Google, by the name of Linwei Ding (aka Leon Ding), used his trusted access to quietly steal sensitive AI-related data over the course of a year.
Here’s how it unfolded:
- He copied proprietary source files and technical documents from internal systems.
- Instead of sending them externally, he pasted the content into a note-taking app on his corporate laptop to avoid raising red flags.
- He converted those notes into PDF files, making them easier to move and store.
- He uploaded the PDFs to a personal cloud storage account, bypassing Google’s traditional DLP controls.
- Over time, this added up to more than 1,200 documents: roughly 14,000 pages of confidential material.
This activity continued for 12 months without being detected. And the deception went further:
- During this same period, he was building relationships with technology companies overseas (in China) and preparing to launch his own startup.
- To maintain appearances, he arranged for someone else to badge into the office using his credentials while he was actually working remotely from abroad.
- By the time authorities stepped in, the data had already been taken and the damage was done.
What makes this case especially concerning isn’t just what was stolen - it’s how normal everything looked WHILE it was happening.
Every action used approved tools, legitimate access, and everyday workflows. Nothing about it looked like a “hack.” It looked like work.
Where Security Controls Failed
This incident did not happen because of a single failure. It was the result of several SaaS security blind spots that are common in modern enterprises.
1) Lack of context around risky actions
First, traditional DLP systems struggled because the data never left the environment in an obvious way. The employee didn’t email files externally or copy them to removable media. Instead, he staged the data inside approved applications and moved it to a cloud account that looked like any other SaaS destination. Without behavioral context, those uploads appeared routine - even though they were incredibly dangerous.
Employee actions need to be analyzed IN CONTEXT: What is this employee accessing? Where are they accessing it from? Does their access to these files make sense in relation to their role? Their scope? Their current projects? Does their manager know they’re accessing them?
If Ding’s manager (or the security team) had been able to see his unusual access patterns and abnormal behavior, he never would have been able to exfiltrate these files in the first place.
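To make that concrete, here's a minimal sketch of what context-aware access analysis can look like. Everything in it - the event fields, the role-to-repository mapping, the thresholds - is a hypothetical stand-in for illustration, not any vendor's real schema:

```python
from dataclasses import dataclass

# Illustrative mapping of which repositories each role is expected to touch
ROLE_SCOPE = {
    "platform-engineer": {"build-tools", "ci-pipelines"},
    "ml-engineer": {"ml-infra", "training-data"},
}

@dataclass
class AccessEvent:
    user: str
    role: str
    repository: str
    files_accessed: int

def context_flags(event: AccessEvent, daily_baseline: float) -> list:
    """Return the reasons this access looks abnormal for this user."""
    flags = []
    # Is the repository inside the user's role scope?
    if event.repository not in ROLE_SCOPE.get(event.role, set()):
        flags.append(f"repo '{event.repository}' is outside role scope")
    # Is the volume far above this user's own historical baseline?
    if event.files_accessed > 5 * daily_baseline:
        flags.append(f"{event.files_accessed} files accessed vs. ~{daily_baseline:.0f}/day baseline")
    return flags

# Hypothetical event: an engineer pulling heavily from a repo outside their role
event = AccessEvent("jdoe", "platform-engineer", "ml-infra", 240)
print(context_flags(event, daily_baseline=12))
```

Neither check is sophisticated on its own; the point is that role scope and per-user baselines turn an otherwise ordinary-looking file access into a reviewable signal.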
2) Lack of communication between SaaS vectors & tools
Second, no system connected identity, application usage, and long-term behavior into one cohesive risk signal. The theft was slow and deliberate, spread across hundreds of small actions.
Each action alone looked harmless, but taken together, they formed a clear pattern of exfiltration. Unfortunately for Google, that pattern only became clear in hindsight.
If there was a third party tool in place that could connect the dots between:
- the identity of that user
- their baseline behavior
- what SaaS apps they were accessing, and
- their long-term behavioral patterns
…all into one risk score, that user could have been:
1) added to a watch list
2) closely monitored by security teams
3) routed into automated remediation pathways
All in order to stop them from accessing and stealing this data - a toy version of that scoring is sketched below.
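The signal names, weights, and thresholds in this sketch are invented for the example and are not any real product's scoring model:

```python
# Hypothetical signal weights - each signal is weak on its own
SIGNAL_WEIGHTS = {
    "off_scope_access": 25,       # touching data outside role scope
    "bulk_download": 20,          # unusual download volume
    "personal_cloud_upload": 30,  # data moving to an unmanaged destination
    "geo_mismatch": 25,           # login location contradicts badge location
}

def risk_score(signals: set) -> int:
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def respond(score: int) -> str:
    # Tiered responses mirroring the list above
    if score >= 70:
        return "trigger automated remediation (suspend tokens, revoke shares)"
    if score >= 40:
        return "escalate for close monitoring by the security team"
    if score >= 20:
        return "add user to a watch list"
    return "no action"

print(respond(risk_score({"off_scope_access"})))  # watch list only
print(respond(risk_score({"off_scope_access", "bulk_download",
                          "personal_cloud_upload"})))  # crosses remediation
```

The design point is the aggregation: no single signal would justify cutting off access, but the combination does.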
3) No way to correlate suspicious IPs and SaaS behavior
Third, geographic and identity inconsistencies were not correlated with data activity. Badge use, device access, and cloud behavior were not analyzed as part of a unified insider risk detection model.
This is what allowed Ding to successfully give a colleague his badge to scan into a Google building, making it appear that he was working in the US when he was actually in China.
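A simple cross-check can surface exactly that kind of inconsistency. The sketch below assumes badge logs and identity-provider sign-in events in a made-up shape; real integrations would pull these from a physical access system and an IdP:

```python
from datetime import datetime, timedelta

# Hypothetical, simplified log entries: (user, country, timestamp)
badge_events = [("jdoe", "US", datetime(2023, 12, 1, 9, 5))]
saas_logins  = [("jdoe", "CN", datetime(2023, 12, 1, 10, 30))]

def impossible_presence(badges, logins, window=timedelta(hours=8)):
    """Flag users badged into one country while logging in from another."""
    alerts = []
    for b_user, b_country, b_time in badges:
        for l_user, l_country, l_time in logins:
            if (b_user == l_user and b_country != l_country
                    and abs(l_time - b_time) <= window):
                alerts.append(
                    f"{b_user}: badged in {b_country} at {b_time:%H:%M}, "
                    f"but SaaS login from {l_country} at {l_time:%H:%M}"
                )
    return alerts

print(impossible_presence(badge_events, saas_logins))
```

Neither log is suspicious in isolation; only the join between physical presence and SaaS activity exposes the contradiction.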
How DoControl Could Have Stopped This Incident
This is EXACTLY the gap DoControl is designed to close.
DoControl monitors how users and employees interact with SaaS data, files, and applications in real time, and enables security teams to effectively REMEDIATE and cut off access in an instant when risky behavior or exfiltration is detected.
Instead of focusing only on file content, it focuses on behavior and intent. In a case like this, DoControl would have flagged:
- Unusual bulk access to sensitive repositories over time
- Repeated downloads and transformations of proprietary files
- Uploads to personal cloud storage accounts
- Risky combinations of device, location, and application behavior
- Long-term accumulation of sensitive documents inconsistent with normal role duties
Because DoControl operates directly at the SaaS and API level, it can detect patterns that legacy DLP tools miss. It can also enforce granular policies, alert security teams when employee behavior deviates from expected norms, and, via automated workflows, remediate exposure the second a risky event is triggered.
Rather than reacting after intellectual property is gone, organizations gain the ability to interrupt risky behavior early - before it turns into an incident that reaches the courtroom.
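To show the general shape of that detect-then-remediate pattern, here is a deliberately generic sketch. This is not DoControl's actual API - the event fields and the two remediation helpers are placeholders invented for illustration:

```python
def suspend_oauth_tokens(user: str) -> None:
    # Placeholder: a real workflow would call the SaaS platform's admin API
    print(f"[remediation] suspended OAuth tokens for {user}")

def quarantine_file(file_id: str) -> None:
    # Placeholder: a real workflow would revoke shares / move the file
    print(f"[remediation] quarantined file {file_id}")

def on_saas_event(event: dict) -> None:
    """Route a risky SaaS event straight into an automated remediation path."""
    if event["type"] == "upload_to_personal_cloud" and event["sensitive"]:
        quarantine_file(event["file_id"])
        suspend_oauth_tokens(event["user"])

on_saas_event({
    "type": "upload_to_personal_cloud",
    "sensitive": True,
    "file_id": "doc-4821",
    "user": "jdoe",
})
```

The essential property is that remediation is wired to the event itself, not to a human reading a report days later.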
What Now?
Since the theft was uncovered, the former engineer has been convicted on multiple counts of economic espionage and trade secret theft.
Ding faces a maximum sentence of 10 years in prison for each count of theft of trade secrets and 15 years for each count of economic espionage.
It sounds severe, because it is. According to the Department of Justice (DoJ), Ding built links to two Chinese companies while working at Google, was in discussions to become the CTO of an early-stage tech company in China, was acting as the CEO of his own startup, and intended to help develop an AI supercomputer and custom machine learning chips for at least two state-controlled entities in China.
Sentencing is still pending, and the case remains a reminder of just how much damage can be done before anyone realizes something is wrong.
Final Thoughts
For the organizations watching this unfold, the legal outcome matters, but the bigger takeaway is operational: the activity went undetected for an entire year. By the time authorities stepped in, the data was already gone and a competing venture was already in motion.
This isn’t just a story about one employee. It’s a warning about how easily insider risk can hide inside everyday tools and normal access.
So what should organizations do now?
Focus on a few high-impact signals that consistently show up in insider threat cases like this one:
→ Slow, steady data movement: Not every breach is a massive download. Long-term accumulation of sensitive files is often the biggest red flag.
→ Cloud-to-cloud transfers: Data now most often leaves through SaaS apps and personal cloud storage, not USB drives or email.
→ Behavior that doesn’t match the role: When users start accessing or sharing data outside their normal job function, risk increases fast.
These are the kinds of patterns security teams need to see early, not after the damage is done.
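As a concrete example of the first signal, a rolling-window check can catch accumulation that no single day's activity would flag. The window size, threshold, and per-day volume below are illustrative assumptions, loosely modeled on this case's roughly 1,200 documents over a year:

```python
from collections import deque

def rolling_exfil_alerts(daily_file_counts, window=90, threshold=300):
    """Yield (day, total) whenever the trailing-window file total exceeds the threshold."""
    recent = deque(maxlen=window)
    for day, count in enumerate(daily_file_counts):
        recent.append(count)
        total = sum(recent)
        if total > threshold:
            yield day, total

# ~4 files/day looks harmless daily, but the 90-day total trips the alert
counts = [4] * 365
print(next(rolling_exfil_alerts(counts)))  # (75, 304): flagged in under 3 months
```

Tuning matters, of course, but even a crude trailing total would have fired many months before a year's worth of documents walked out the door.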
This case wasn’t a sophisticated cyberattack. It was a trusted employee using approved tools, legitimate access, and normal workflows to quietly walk sensitive data out the door.
That’s what makes insider threats so dangerous, and so easy to miss.
As companies invest more heavily in AI, intellectual property, and cloud-based collaboration, insider risk becomes a business risk. Protecting that data means understanding how people use it, where it moves, and when behavior starts to drift from normal.
In 2026, the most damaging events don't come from outside the company. They come from someone who already has a badge.
Sources:
https://www.infosecurity-magazine.com/news/google-staffer-charged-stealing/


