
Commit baa198a

Authored by David Hildenbrand (Red Hat), committed by gregkh
mm/hugetlb: fix two comments related to huge_pmd_unshare()
commit 3937027 upstream.

Ever since we stopped using the page count to detect shared PMD page tables, these comments are outdated.

The only reason we have to flush the TLB early is that once we drop i_mmap_rwsem, the previously shared page table could get freed (and then reallocated and used for another purpose). So we really have to flush the TLB before that can happen. Let's simplify the comments a bit.

The "If we unshared PMDs, the TLB flush was not recorded in mmu_gather." part introduced in commit a4a118f ("hugetlbfs: flush TLBs correctly after huge_pmd_unshare") was confusing: sure it is recorded in the mmu_gather, otherwise tlb_flush_mmu_tlbonly() wouldn't do anything. So let's drop that comment while at it as well.

We'll centralize these comments in a single helper as we rework the code next.

Link: https://lkml.kernel.org/r/20251223214037.580860-3-david@kernel.org
Fixes: 59d9094 ("mm: hugetlb: independent PMD page table shared count")
Signed-off-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Laurence Oberman <loberman@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Cc: Liu Shixin <liushixin2@huawei.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: "Uschakow, Stanislav" <suschako@amazon.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent 3a18b45 commit baa198a

1 file changed: mm/hugetlb.c (8 additions, 16 deletions)
@@ -5653,17 +5653,10 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	tlb_end_vma(tlb, vma);
 
 	/*
-	 * If we unshared PMDs, the TLB flush was not recorded in mmu_gather. We
-	 * could defer the flush until now, since by holding i_mmap_rwsem we
-	 * guaranteed that the last refernece would not be dropped. But we must
-	 * do the flushing before we return, as otherwise i_mmap_rwsem will be
-	 * dropped and the last reference to the shared PMDs page might be
-	 * dropped as well.
-	 *
-	 * In theory we could defer the freeing of the PMD pages as well, but
-	 * huge_pmd_unshare() relies on the exact page_count for the PMD page to
-	 * detect sharing, so we cannot defer the release of the page either.
-	 * Instead, do flush now.
+	 * There is nothing protecting a previously-shared page table that we
+	 * unshared through huge_pmd_unshare() from getting freed after we
+	 * release i_mmap_rwsem, so flush the TLB now. If huge_pmd_unshare()
+	 * succeeded, flush the range corresponding to the pud.
 	 */
 	if (force_flush)
 		tlb_flush_mmu_tlbonly(tlb);
@@ -6888,11 +6881,10 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 		cond_resched();
 	}
 	/*
-	 * Must flush TLB before releasing i_mmap_rwsem: x86's huge_pmd_unshare
-	 * may have cleared our pud entry and done put_page on the page table:
-	 * once we release i_mmap_rwsem, another task can do the final put_page
-	 * and that page table be reused and filled with junk. If we actually
-	 * did unshare a page of pmds, flush the range corresponding to the pud.
+	 * There is nothing protecting a previously-shared page table that we
+	 * unshared through huge_pmd_unshare() from getting freed after we
+	 * release i_mmap_rwsem, so flush the TLB now. If huge_pmd_unshare()
+	 * succeeded, flush the range corresponding to the pud.
 	 */
 	if (shared_pmd)
 		flush_hugetlb_tlb_range(vma, range.start, range.end);
