
Commit 68b1fa0

riteshharjani authored and maddy-kerneldev committed
powerpc/64s: Fix _HPAGE_CHG_MASK to include _PAGE_SPECIAL bit
commit af38538 ("mm/memory: factor out common code from vm_normal_page_*()") added a VM_WARN_ON_ONCE for the huge zero pfn. This can lead to the following call stack:

------------[ cut here ]------------
WARNING: mm/memory.c:735 at vm_normal_page_pmd+0xf0/0x140, CPU#19: hmm-tests/3366
NIP [c00000000078d0c0] vm_normal_page_pmd+0xf0/0x140
LR [c00000000078d060] vm_normal_page_pmd+0x90/0x140
Call Trace:
[c00000016f56f850] [c00000000078d060] vm_normal_page_pmd+0x90/0x140 (unreliable)
[c00000016f56f8a0] [c0000000008a9e30] change_huge_pmd+0x7c0/0x870
[c00000016f56f930] [c0000000007b2bc4] change_protection+0x17a4/0x1e10
[c00000016f56fba0] [c0000000007b3440] mprotect_fixup+0x210/0x4c0
[c00000016f56fc30] [c0000000007b3c3c] do_mprotect_pkey+0x54c/0x780
[c00000016f56fdb0] [c0000000007b3ed8] sys_mprotect+0x68/0x90
[c00000016f56fdf0] [c00000000003ae40] system_call_exception+0x190/0x500
[c00000016f56fe50] [c00000000000d05c] system_call_vectored_common+0x15c/0x2ec

This happens when mprotect() reaches change_huge_pmd():

mprotect()
  change_pmd_range()
    pmd_modify(oldpmd, newprot)     # this clears _PAGE_SPECIAL for the zero huge pmd
      pmdv = pmd_val(pmd);
      pmdv &= _HPAGE_CHG_MASK;      # -> _PAGE_SPECIAL gets cleared here
      return pmd_set_protbits(__pmd(pmdv), newprot);
    can_change_pmd_writable(vma, vmf->address, pmd)
      vm_normal_page_pmd(vma, addr, pmd)
        __vm_normal_page()
          VM_WARN_ON(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
          # this gets hit because _PAGE_SPECIAL for the zero huge pmd was cleared
It can be easily reproduced with the following testcase:

p = mmap(NULL, 2 * hpage_pmd_size, PROT_READ,
         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
madvise((void *)p, 2 * hpage_pmd_size, MADV_HUGEPAGE);
aligned = (char *)(((unsigned long)p + hpage_pmd_size - 1) &
                   ~(hpage_pmd_size - 1));
(void)(*(volatile char *)aligned); // read fault, installs huge zero PMD
mprotect((void *)aligned, hpage_pmd_size, PROT_READ | PROT_WRITE);

This patch adds _PAGE_SPECIAL to _HPAGE_CHG_MASK, similar to _PAGE_CHG_MASK, as we don't want to clear this bit when calling pmd_modify() while changing protection bits.

Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Link: https://patch.msgid.link/7416f5cdbcfeaad947860fcac488b483f1287172.1773078178.git.ritesh.list@gmail.com
1 parent bbcbf04 commit 68b1fa0

1 file changed

Lines changed: 2 additions & 2 deletions

arch/powerpc/include/asm/book3s/64/pgtable.h

@@ -107,8 +107,8 @@
  * in here, on radix we expect them to be zero.
  */
 #define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
-			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
-			 _PAGE_SOFT_DIRTY)
+			 _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_SPECIAL | \
+			 _PAGE_PTE | _PAGE_SOFT_DIRTY)
 /*
  * user access blocked by key
  */
