Why Your Ransomware Defence Is Looking at the Wrong Thing
Ninety-three percent of organisations have seen at least one ransomware-ready intrusion in the last two years. Average ransom demands now top $1.5 million. Backups are compromised in nearly 40% of incidents. Twenty-three days of downtime per successful attack.
These are 2026 numbers. After a decade of next-generation endpoint detection, AI-powered threat intelligence, and billion-dollar acquisitions. The industry has never spent more on the problem. The problem has never been worse.
Something fundamental is wrong with the approach.
The Signal Problem
Every major ransomware defence — endpoint detection, behavioural analysis, process monitoring — works the same way. It watches how software behaves and tries to decide whether that behaviour is malicious. Is this process encrypting files too fast? Is this binary calling suspicious APIs? Does this network pattern match a known command-and-control signature?
These are all signals. And signals can be faked, evaded, or simply changed faster than defences can adapt.
Fileless attacks now account for the majority of ransomware intrusions. Attackers use legitimate tools — PowerShell, WMI, RDP — that look identical to normal administration. Ransomware-as-a-Service platforms iterate weekly, generating novel variants that have never been seen before. AI-generated malware adapts its execution patterns in real time.
The result: 76% of ransomware attacks now include data theft alongside encryption. Attackers are inside the network, moving laterally through stolen credentials, and exfiltrating data long before any detection tool raises an alert. By the time you see the signal, the damage is already done.
This is not a failure of any particular product. It's a failure of the entire detection philosophy. Watching how something happens will always be an arms race. The attacker controls the "how."
What If You Watched the Traffic Instead?
There's a different way to approach the problem. Instead of monitoring process behaviour, monitor the pattern of file access itself.
Consider a file share — the kind every enterprise runs, where sensitive documents, financial records, and customer data actually live. Most breach data comes from file shares, not databases. Yet most security tools focus on endpoints and network perimeters, leaving the data layer surprisingly unmonitored.
Every file share has a natural rhythm. Finance accesses the quarterly reports on predictable cycles. Engineering touches build artefacts during working hours. The CEO's assistant opens board documents the week before a meeting. Over time, this normal traffic forms a baseline — a fingerprint of how the organisation actually uses its data, day by day, hour by hour.
Here's the key insight: an attacker who wants to encrypt or exfiltrate a file share cannot do so within that baseline. The operation requires touching hundreds of thousands of files across directories that no single user or process would normally access in that volume or timeframe. The attacker's usage profile is fundamentally, unavoidably different from every legitimate user's. That deviation from the baseline is the detection.
This is not behavioural analysis of a process. It's physics. To steal or encrypt a terabyte of files, you have to read or write a terabyte of files. No amount of credential theft, tool obfuscation, or fileless technique changes the volume of file access required. The baseline sees it the way a seismograph sees an earthquake — not by identifying the fault line, but by measuring the shaking.
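To make the idea concrete, here is a minimal sketch of per-account baselining in Python. It assumes an audit feed of file-access events keyed by account (as most file servers can produce); the hourly windowing, the 24-hour warm-up, and the three-sigma threshold are illustrative choices, not a prescribed implementation.

```python
# Minimal sketch of per-account file-access baselining.
# Assumes an audit feed of (account, path, timestamp) events; the hourly
# window, warm-up period, and 3-sigma threshold are illustrative only.
from collections import defaultdict
from statistics import mean, stdev

class AccessBaseline:
    def __init__(self, min_history=24, sigma=3.0):
        self.history = defaultdict(list)   # account -> past hourly access counts
        self.current = defaultdict(int)    # account -> count in the current hour
        self.min_history = min_history     # hours of history needed before alerting
        self.sigma = sigma                 # deviation threshold

    def record(self, account: str) -> None:
        """Count one file-access event for this account in the current hour."""
        self.current[account] += 1

    def close_hour(self) -> list[str]:
        """Roll the hour over; return accounts whose volume broke their baseline."""
        alerts = []
        for account, count in self.current.items():
            past = self.history[account]
            if len(past) >= self.min_history:
                mu, sd = mean(past), stdev(past)
                if count > mu + self.sigma * max(sd, 1.0):
                    alerts.append(account)
            past.append(count)             # fold this hour into the baseline
        self.current.clear()
        return alerts
```

An account that averages a dozen accesses an hour and suddenly produces thousands crosses that threshold in its first anomalous window, no matter which tool or variant generated the traffic.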
Why Baselines Beat Signatures
The advantage is architectural, not incremental.
Zero-day resilience. A novel ransomware variant has, by definition, no known signature and no established behavioural pattern. But it still has to touch files at a scale and pace that shatters the baseline. The method is invisible; the volume is not.
Pre-impact detection. Signal-based tools detect attacks during execution — sometimes in time to stop encryption, often not. Baseline monitoring detects anomalous access patterns before bulk encryption begins. The reconnaissance phase, the lateral movement, the initial probing of file shares — these all produce access anomalies that precede the destructive payload. An account that normally reads twelve files a day suddenly reading twelve thousand is visible long before encryption starts.
No endpoint agent required. Because monitoring happens at the data layer — on the file share itself — there's no software agent on the endpoint that can be disabled, evaded, or compromised. The detection mechanism is invisible to the attacker and operates independently of the machine being attacked.
Exfiltration detection. Modern ransomware increasingly steals data before encrypting it. Signal-based tools struggle with exfiltration because it often uses legitimate protocols and credentials. But the access pattern of exfiltration — mass file reads from directories that normally see minimal traffic — deviates from baseline in ways no legitimate workflow would.
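The same principle applies at directory granularity, which is the shape exfiltration takes. The sketch below is equally hedged: the window length, the tenfold multiplier, and the share-relative path bucketing are assumptions. It counts reads per top-level folder and flags any folder whose read volume far exceeds its own running average, even when the protocol and credentials are legitimate.

```python
# Sketch of directory-level read monitoring for exfiltration patterns.
# Assumes share-relative paths like "finance/q3/report.xlsx"; the window
# size and the tenfold multiplier are illustrative assumptions.
from collections import defaultdict

class DirectoryReadMonitor:
    def __init__(self, multiplier=10.0):
        self.baseline = defaultdict(int)   # folder -> total reads seen so far
        self.window = defaultdict(int)     # folder -> reads in the current window
        self.windows_seen = 0
        self.multiplier = multiplier       # how far above trend counts as anomalous

    def record_read(self, path: str) -> None:
        """Bucket each read by its top-level folder on the share."""
        top = path.strip("/").split("/")[0]
        self.window[top] += 1

    def close_window(self) -> list[str]:
        """End the observation window; flag folders read far above their trend."""
        alerts = []
        self.windows_seen += 1
        for folder, reads in self.window.items():
            avg = self.baseline[folder] / max(self.windows_seen - 1, 1)
            # A folder that normally sees a trickle of reads and suddenly serves
            # thousands stands out regardless of who holds the credentials.
            if self.windows_seen > 1 and reads > self.multiplier * max(avg, 1.0):
                alerts.append(folder)
            self.baseline[folder] += reads
        self.window.clear()
        return alerts
```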
A Secondary Layer: Statistical Markers
Baselining catches the macro pattern. But there's a complementary technique that adds a second line of detection at finer granularity.
By seeding file shares with statistical marker files — documents that look ordinary but serve as tripwires — defenders create a mesh of silent alarms. These markers are distributed across directories, indistinguishable from real documents, and are never accessed during normal operations. Any read or modification of a marker file is, by definition, anomalous.
Marker files don't replace baseline detection. They reinforce it. A baseline anomaly tells you something abnormal is happening across the file share. A triggered marker tells you exactly where the attacker is right now. Together, they provide both the wide-angle view and the precise coordinates — detection and localisation in a single architecture.
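A minimal sketch of the marker-file idea, using only the Python standard library, is below. The directory list and file name are hypothetical, polling stands in for what would normally be a hook into the file server's audit log, and the check relies on modification rather than access times, since many filesystems no longer update access times reliably.

```python
# Sketch of marker-file ("tripwire") seeding and checking, standard library only.
# Paths and the marker name are hypothetical; a production deployment would hook
# the file server's audit log rather than poll, and would vary markers per share.
import hashlib
import os

MARKER_DIRS = ["/srv/share/finance", "/srv/share/hr", "/srv/share/archive"]
MARKER_NAME = "Q3_budget_working_copy.xlsx"   # looks routine, never used legitimately

def _digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def seed_markers() -> dict[str, tuple[float, str]]:
    """Drop one marker per directory; remember its mtime and content hash."""
    state = {}
    for d in MARKER_DIRS:
        path = os.path.join(d, MARKER_NAME)
        with open(path, "wb") as f:
            f.write(os.urandom(4096))          # plausible-looking opaque content
        state[path] = (os.stat(path).st_mtime, _digest(path))
    return state

def check_markers(state: dict[str, tuple[float, str]]) -> list[str]:
    """Return markers that were rewritten, encrypted in place, renamed, or deleted."""
    tripped = []
    for path, (mtime, digest) in state.items():
        try:
            changed = os.stat(path).st_mtime != mtime or _digest(path) != digest
        except FileNotFoundError:              # renamed or deleted, e.g. to .locked
            changed = True
        if changed:
            tripped.append(path)
    return tripped
```

A tripped marker is the localisation described above: the folder containing it is where the attacker is operating right now.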
The Elephant in the File Share
The cybersecurity industry has a blind spot. Billions are spent on endpoint detection, network monitoring, identity management, and cloud security posture. But the place where the most valuable data actually lives — file shares, NAS devices, shared drives — is monitored primarily by access control lists and the occasional DLP scan.
Ransomware attackers know this. It's why they target file shares. It's why data exfiltration happens through file shares. And it's why a detection approach that starts at the data layer, rather than the endpoint or the network, addresses the problem where it actually occurs.
This isn't about replacing existing tools. EDR, SIEM, and identity management all serve important functions. But they're watching the perimeter while the data sits largely unguarded. Baseline-driven detection fills that gap — not by adding another layer of signal analysis, but by changing what you're looking at entirely.
What This Means for Defenders
If you're a CISO, a security architect, or a board member reviewing your ransomware posture, ask one question: Do we know what normal looks like for our file access?
Not who has access. Not what processes are running. Not what the network traffic looks like. Do we have a baseline of how our data is actually used — and would we see an attacker deviating from it?
If the answer is "we'd know if something encrypted our files," you're already too late. Encryption is the final act. Exfiltration is the quiet prelude. And the reconnaissance that precedes both is invisible to tools that don't monitor at the data layer.
The organisations that will weather the ransomware crisis of 2026 and beyond are the ones that shift their detection philosophy from signatures to baselines. From watching how attacks happen to measuring the access patterns attacks inevitably produce. From chasing the attacker's methods to monitoring the attacker's unavoidable footprint.
The methods will always change. The footprint won't.