
Commit bbe0dec

anadav authored and ksacilotto committed
hugetlbfs: flush TLBs correctly after huge_pmd_unshare
When __unmap_hugepage_range() calls to huge_pmd_unshare() succeed, a TLB
flush is missing. This TLB flush must be performed before releasing the
i_mmap_rwsem, in order to prevent an unshared PMDs page from being
released and reused before the TLB flush took place.

Arguably, a comprehensive solution would use the mmu_gather interface to
batch the TLB flushes and the PMDs page release, however it is not an
easy solution: (1) try_to_unmap_one() and try_to_migrate_one() also call
huge_pmd_unshare() and they cannot use the mmu_gather interface; and
(2) deferring the release of the page reference for the PMDs page until
after i_mmap_rwsem is dropped can confuse huge_pmd_unshare() into
thinking PMDs are shared when they are not.

Fix __unmap_hugepage_range() by adding the missing TLB flush, and
forcing a flush when unshare is successful.

Fixes: 24669e5 ("hugetlb: use mmu_gather instead of a temporary linked list for accumulating pages") # 3.6
Signed-off-by: Nadav Amit <namit@vmware.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit a4a118f)
CVE-2021-4002
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Acked-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
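The ordering argument in the message can be made concrete. Below is a toy userspace model, not kernel code: every identifier in it (pmd_page_model, unshare_pmd, flush_tlb_model, and so on) is invented for illustration. It encodes the invariant the fix enforces: the stale TLB entry must be gone before the last reference to the shared PMD page can be dropped.

/*
 * Toy model of the ordering bug; all names here are illustrative
 * stand-ins, not kernel APIs.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct pmd_page_model {
        int refcount;               /* models page_count() of the PMD page */
};

static struct pmd_page_model *cached_translation; /* models a stale TLB entry */
static bool tlb_is_flushed;

/* Models huge_pmd_unshare(): clears the PUD and drops one reference. */
static void unshare_pmd(struct pmd_page_model **pmd)
{
        cached_translation = *pmd;  /* the TLB still caches the old entry */
        (*pmd)->refcount--;
        *pmd = NULL;
}

/* Models the flush the commit adds, done before i_mmap_rwsem is released. */
static void flush_tlb_model(void)
{
        cached_translation = NULL;
        tlb_is_flushed = true;
}

/* Models dropping i_mmap_rwsem and the final put_page() of the PMD page. */
static void release_last_reference(struct pmd_page_model *pmd)
{
        /* Without the fix, tlb_is_flushed could still be false here, and
         * cached_translation would point at freed, reusable memory. */
        assert(tlb_is_flushed && cached_translation == NULL);
        if (--pmd->refcount == 0)
                free(pmd);
}

int main(void)
{
        struct pmd_page_model *pmd = malloc(sizeof(*pmd));
        struct pmd_page_model *shared = pmd;

        pmd->refcount = 2;          /* shared between two mappings */
        unshare_pmd(&pmd);          /* __unmap_hugepage_range() path */
        flush_tlb_model();          /* the fix: flush before the rwsem drop */
        release_last_reference(shared);
        return 0;
}

Reordering flush_tlb_model() after release_last_reference() trips the assertion, which is the userspace analogue of the use-after-free window that CVE-2021-4002 describes.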
1 parent a341223 commit bbe0dec

1 file changed: mm/hugetlb.c (19 additions, 4 deletions)
@@ -3589,6 +3589,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
         struct hstate *h = hstate_vma(vma);
         unsigned long sz = huge_page_size(h);
         struct mmu_notifier_range range;
+        bool force_flush = false;
 
         WARN_ON(!is_vm_hugetlb_page(vma));
         BUG_ON(start & ~huge_page_mask(h));
@@ -3617,10 +3618,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
                 ptl = huge_pte_lock(h, mm, ptep);
                 if (huge_pmd_unshare(mm, &address, ptep)) {
                         spin_unlock(ptl);
-                        /*
-                         * We just unmapped a page of PMDs by clearing a PUD.
-                         * The caller's TLB flush range should cover this area.
-                         */
+                        tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
+                        force_flush = true;
                         continue;
                 }
 
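For context on the replacement lines above: tlb_flush_pmd_range() records the unshared PUD-sized range in the mmu_gather so that a later flush covers it. A sketch of the generic helper, roughly as it appears in include/asm-generic/tlb.h around this kernel era (check the tree for the exact definition):

static inline void tlb_flush_pmd_range(struct mmu_gather *tlb,
                                       unsigned long address, unsigned long size)
{
        /* Widen the pending flush window to cover the unshared range... */
        __tlb_adjust_range(tlb, address, size);
        /* ...and record that PMD-level entries were torn down. */
        tlb->cleared_pmds = 1;
}

Note that this only widens the pending flush range and marks PMD-level entries as cleared; the actual invalidation is still deferred, which is why the new force_flush flag is needed to guarantee a flush before the function returns.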
@@ -3677,6 +3676,22 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
         }
         mmu_notifier_invalidate_range_end(&range);
         tlb_end_vma(tlb, vma);
+
+        /*
+         * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
+         * could defer the flush until now, since by holding i_mmap_rwsem we
+         * guaranteed that the last reference would not be dropped. But we must
+         * do the flushing before we return, as otherwise i_mmap_rwsem will be
+         * dropped and the last reference to the shared PMDs page might be
+         * dropped as well.
+         *
+         * In theory we could defer the freeing of the PMD pages as well, but
+         * huge_pmd_unshare() relies on the exact page_count for the PMD page to
+         * detect sharing, so we cannot defer the release of the page either.
+         * Instead, do flush now.
+         */
+        if (force_flush)
+                tlb_flush_mmu_tlbonly(tlb);
 }
 
 void __unmap_hugepage_range_final(struct mmu_gather *tlb,
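The new comment's point about page_count is the crux of why the page release cannot be deferred. A simplified sketch of huge_pmd_unshare() from this era of mm/hugetlb.c (the exact body varies across kernel versions):

int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
{
        pgd_t *pgd = pgd_offset(mm, *addr);
        p4d_t *p4d = p4d_offset(pgd, *addr);
        pud_t *pud = pud_offset(p4d, *addr);

        BUG_ON(page_count(virt_to_page(ptep)) == 0);
        if (page_count(virt_to_page(ptep)) == 1)
                return 0;       /* a count of 1 means not shared: nothing to do */

        pud_clear(pud);                     /* unhook the shared PMD page   */
        put_page(virt_to_page(ptep));       /* drop this mm's reference     */
        mm_dec_nr_pmds(mm);
        *addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
        return 1;                           /* caller must flush the TLB    */
}

Keeping the PMD page alive past the i_mmap_rwsem release would leave its page_count artificially high, so a concurrent huge_pmd_unshare() would wrongly treat the page as still shared. Hence the fix defers only the flush, and only up to the force_flush check at the end of __unmap_hugepage_range().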
