The enforcement of chunk alignment in 64-bit systems significantly enhances Malloc's security by **limiting the placement of fake chunks to only 1 out of every 16 addresses**. This complicates exploitation efforts, especially in scenarios where the user has limited control over input values, making attacks more complex and harder to execute successfully.

**Fastbin Attack on `__malloc_hook`**

The new alignment rules in Malloc also thwart a classic attack involving the `__malloc_hook`. Previously, attackers could manipulate chunk sizes to **overwrite this function pointer** and gain **code execution**. Now, the strict alignment requirement ensures that such manipulations are no longer viable, closing a common exploitation route and enhancing overall security.
> **Note:** Since glibc **2.34** the legacy hooks (`__malloc_hook`, `__free_hook`, etc.) are removed from the exported ABI. Modern exploits now target other writable function pointers (e.g. tcache per-thread struct, vtable-style callbacks) or rely on `setcontext`, `_IO_list_all` primitives, etc.
## Pointer Mangling on fastbins and tcache
**Pointer Mangling** is a security enhancement used to protect **fastbin and tcache Fd pointers** in memory management operations. This technique helps prevent certain types of memory exploit tactics, specifically those that do not require leaked memory information or that manipulate memory locations directly relative to known positions (relative **overwrites**).
Pointer mangling aims to **prevent partial and full pointer overwrites in heap**:

3. **Requirement for Heap Leaks in Non-Heap Locations**: Creating a fake chunk in non-heap areas (like the stack, .bss section, or PLT/GOT) now also **requires a heap leak** due to the need for pointer mangling. This extends the complexity of exploiting these areas, similar to the requirement for manipulating LibC addresses.
4. **Leaking Heap Addresses Becomes More Challenging**: Pointer mangling restricts the usefulness of Fd pointers in fastbin and tcache bins as sources for heap address leaks. However, pointers in unsorted, small, and large bins remain unmangled, thus still usable for leaking addresses. This shift pushes attackers to explore these bins for exploitable information, though some techniques may still allow for demangling pointers before a leak, albeit with constraints.
Even with safe-linking enabled (glibc ≥ 2.32), if you can leak the mangled pointer and both the corrupted chunk and the victim chunk share the same 4KB page, the original pointer can be recovered with just the page offset:

```c
// leaked_fd is the mangled Fd read from the chunk on the same page
uintptr_t l = (uintptr_t)&chunk->fd;        // storage location
uintptr_t original = leaked_fd ^ (l >> 12); // demangle
```

This restores the Fd and permits classic tcache/fastbin poisoning. If the chunks sit on different pages, brute-forcing the 12-bit page offset (0x1000 possibilities) is often feasible when allocation patterns are deterministic or when crashes are acceptable (e.g., CTF-style exploits).
### **Demangling Pointers with a Heap Leak**
> [!CAUTION]
Pointer guard is an exploit mitigation technique used in glibc to protect stored function pointers:

4. **Alternative Plaintexts:** The attacker can also experiment with mangling pointers with known values like 0 or -1 to see if these produce identifiable patterns in memory, potentially revealing the secret when these patterns are found in memory dumps.
5. **Practical Application:** After computing the secret, an attacker can manipulate pointers in a controlled manner, essentially bypassing the Pointer Guard protection in a multithreaded application with knowledge of the libc base address and an ability to read arbitrary memory locations.
The dynamic loader parses `GLIBC_TUNABLES` before program startup, so mis-parsing bugs there affect **libc** before most mitigations kick in. The 2023 "Looney Tunables" bug (CVE-2023-4911) is an example: an overlong `GLIBC_TUNABLES` value overflows an internal buffer in `ld.so`, enabling **privilege escalation** on many distros when combined with SUID binaries. Exploitation requires only crafting the environment and repeatedly invoking the target binary; neither Pointer Guard nor safe-linking prevents it, because the corruption happens in the loader before the heap is even set up.