Every organization running antivirus has already accepted that antivirus is not enough — because if it were, there would be no breaches. The organizations that stay cleanest aren't the ones with the best antivirus. They're the ones that have structured their environment to reduce what malware can do even when it gets through the first layer.
This is the structural thinking behind that approach.
## Understand the Attack Surface First
The attack surface is the sum of all the ways an attacker can interact with your environment: email systems, external-facing services, USB ports, authenticated accounts, installed software, browser activity, remote access tools. Malware enters through one of these vectors. Reducing the attack surface means closing the ones you don't need and hardening the ones you do.
This sounds obvious, but most organizations have substantial attack surface they don't actively maintain: legacy applications nobody updated, services that were "temporarily" exposed and forgotten, accounts with privileges accumulated over years of promotions and role changes. The first honest inventory of what's actually exposed is often surprising.
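One way to start that inventory is simply to check which TCP ports on a host actually accept connections. The sketch below is illustrative, not a replacement for a real scanner like Nmap; the `scan_open_ports` helper and its parameters are names invented here for the example.

```python
import socket

def scan_open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising on failure
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Running something like this against your own address ranges, then asking "why is that port open?" for every hit, is the first honest pass at the inventory.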
## Least Privilege: The Control With the Highest Leverage
Least privilege is the principle of giving each account, process, and service the minimum permissions needed to perform its function — nothing more.
The technical impact on malware is significant. Most malware executes with the privileges of the user who triggered it. Standard user accounts cannot install services, write to system directories, modify registry hives outside their own profile, or disable security software. Malware running as a standard user has a dramatically constrained damage radius compared to malware running with administrator rights.
This single control — running users as non-administrators — prevents entire categories of persistence mechanisms and privilege-based attacks without requiring any additional tooling. It's not perfect (local privilege escalation exploits exist), but it raises the cost of attack substantially.
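The constrained damage radius is easy to observe directly: just check which sensitive directories the current process can actually write to. This is a minimal sketch; the path list and function name are illustrative, and a real audit would cover far more locations.

```python
import os

# Illustrative examples of directories malware would want to write to
SENSITIVE_PATHS = ["/etc", "/usr/bin", "C:\\Windows\\System32"]

def writable_sensitive_paths(paths: list[str] = SENSITIVE_PATHS) -> list[str]:
    """Return the sensitive directories the current process could modify."""
    return [p for p in paths if os.path.isdir(p) and os.access(p, os.W_OK)]
```

Run as a standard user, the list should be empty; run elevated, it isn't. That difference is the whole argument for non-administrator accounts.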
## Control What Can Execute
Application control is the practice of defining which executables are permitted to run on a system. In its strictest form, this means only explicitly approved, signed executables can run. Anything else — an executable dropped in a temp folder, a script downloaded by a browser, an unsigned binary — gets blocked before it executes.
This approach breaks the execution phase of most malware delivery chains. The phishing email can deliver its payload, the payload can be written to disk, but if execution is blocked, the attack stops there.
Strict application control carries real management overhead and requires clear communication with users about what will be blocked and why. A lighter implementation — blocking execution from user-writable directories like %TEMP% and %APPDATA% — provides substantial benefit with lower operational cost.
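The path-based rule can be sketched as a simple policy check. This is a model of the logic, not a substitute for enforcement tooling like AppLocker or WDAC; the deny list and function name are assumptions made for the example.

```python
import os
from pathlib import Path

def blocked_by_path_policy(exe_path: str) -> bool:
    """True if `exe_path` sits under a user-writable directory we deny execution from."""
    # Illustrative deny list: the directories named in the text, plus Downloads.
    denied_roots = [
        os.environ.get("TEMP", "/tmp"),
        os.environ.get("APPDATA", str(Path.home() / ".local")),
        str(Path.home() / "Downloads"),
    ]
    resolved = Path(exe_path).resolve()  # resolve symlinks before comparing
    return any(resolved.is_relative_to(Path(root).resolve()) for root in denied_roots)
```

Real enforcement happens in the OS, not in a script like this, but the rule itself is this simple: legitimate software rarely executes from these directories, while phishing payloads almost always do.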
## Patch Management as a Security Control
Unpatched software is exploitable software. This isn't a metaphor — CVE databases publish the exact vulnerabilities in every version of common software, and exploit code is often publicly available within days of a patch release.
The gap between "patch available" and "patch deployed" is the window during which attackers with knowledge of the vulnerability can reliably exploit it. Reducing that window — through automated patching, prioritized patching of internet-facing software, and rapid deployment for critical vulnerabilities — directly reduces exploitability.
Browsers, email clients, office suites, and PDF readers are the highest-priority targets because they process attacker-controlled content by design.
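Prioritization can be made explicit with a scoring function that weighs severity, internet exposure, and how long the patch has been available. The weights below are illustrative, not a standard; `PendingPatch` and `patch_priority` are names invented for this sketch.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PendingPatch:
    software: str
    severity: float          # CVSS-style score, 0-10
    patch_released: date
    internet_facing: bool

def patch_priority(p: PendingPatch, today: date) -> float:
    """Higher score = patch sooner. Weights here are illustrative."""
    window_days = (today - p.patch_released).days  # exploitable window so far
    exposure = 2.0 if p.internet_facing else 1.0
    return p.severity * exposure + window_days * 0.1

patches = [
    PendingPatch("pdf-reader", 7.8, date(2024, 1, 2), False),
    PendingPatch("vpn-gateway", 9.1, date(2024, 1, 10), True),
]
patches.sort(key=lambda p: patch_priority(p, date(2024, 1, 20)), reverse=True)
```

Even a crude score like this beats patching in alphabetical order: the internet-facing, high-severity item rises to the top regardless of which patch arrived first.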
## Network Segmentation
An isolated workstation that gets infected is a contained problem. A flat network where every machine can reach every other machine turns that same infection into a lateral movement opportunity that can affect the entire environment.
Segmentation means restricting which systems can communicate with which other systems. The implementation varies — VLANs, firewalls between segments, host-based firewall rules — but the principle is consistent: limit the blast radius of a compromise by limiting what the compromised machine can reach.
Critical systems (domain controllers, file servers, backup infrastructure) should not be reachable from standard user workstations unless there's a specific, justified need.
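The principle behind segment policy is default-deny: a flow between segments is permitted only if it is explicitly allowlisted. A minimal sketch, with segment names and flows invented for illustration:

```python
# Default-deny segment policy: only listed (src, dst, port) flows are allowed.
ALLOWED_FLOWS = {
    ("workstations", "file-servers", 445),      # SMB to file servers only
    ("admin-jump", "domain-controllers", 636),  # LDAPS from the jump host only
}

def flow_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """A flow is permitted only if explicitly allowlisted."""
    return (src_segment, dst_segment, port) in ALLOWED_FLOWS
```

Whether this policy lives in a firewall ruleset, VLAN ACLs, or host-based rules, the shape is the same: everything not listed is denied, so a compromised workstation simply cannot reach the domain controllers.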
## Backups as a Technical Security Control
Backups are not just a recovery mechanism — they're a ransomware mitigation control. The measure of a backup strategy's security value is its independence from the systems it backs up.
Backups that are accessible from domain-joined machines, mounted as network drives, or managed through an agent running on potentially compromised hosts are vulnerable to the same ransomware that targets your primary data. Immutable, offsite, or offline backups — stored in a way that compromised endpoints cannot modify — are not.
The 3-2-1 rule (three copies, two media types, one offsite) is a reasonable baseline. The critical addition for ransomware specifically is verifying that at least one copy cannot be deleted or modified from any machine in the primary environment.
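The 3-2-1 rule plus the immutability requirement is concrete enough to check mechanically. A sketch of that check, with the `BackupCopy` model invented here for illustration:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "object-storage"
    offsite: bool
    immutable: bool   # cannot be modified or deleted from the primary environment

def satisfies_3_2_1_plus(copies: list[BackupCopy]) -> bool:
    """3 copies, 2 media types, 1 offsite, plus at least one immutable copy."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.immutable for c in copies)
    )
```

The immutability flag is the one that matters against ransomware: three copies on domain-reachable disk satisfy the count but fail the only test that counts.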
## The Layered Model
| Layer | Control | What It Stops |
|---|---|---|
| Entry | Email filtering, phishing training, MFA on external services | Initial access |
| Execution | Application control, macro policies, script restrictions | Payload execution |
| Privilege | Least privilege, admin account separation | Privilege escalation |
| Spread | Network segmentation, firewall rules | Lateral movement |
| Persistence | Endpoint detection, registry monitoring | Survival across reboots |
| Recovery | Offline backups, tested restore procedures | Ransomware impact |
No single layer is sufficient. The value of defense-in-depth is that attackers must defeat multiple independent controls. Each layer that holds makes the attack more expensive, more detectable, and more likely to fail or be stopped in progress.
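The "multiple independent controls" argument can be put in back-of-envelope numbers. Assuming each layer is bypassed independently with some probability (a simplification — real controls are only partially independent), the attack succeeds only if every layer fails:

```python
def attack_succeeds(bypass_chances: list[float]) -> float:
    """Probability an attack defeats every layer, assuming independent bypasses."""
    p = 1.0
    for chance in bypass_chances:
        p *= chance  # the attack must get past this layer AND all the others
    return p

# Six mediocre layers, each bypassed 30% of the time:
# 0.3 ** 6 ≈ 0.0007 — under a 0.1% end-to-end success rate.
six_layers = attack_succeeds([0.3] * 6)
```

The exact probabilities are unknowable in practice, but the multiplicative structure is the point: several imperfect layers compound into a far harder target than one strong layer alone.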